• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 13th, 2023



  • SinAdjetivos@beehaw.org to Memes@lemmy.ml · YouTube · 10 months ago

    It depends on how the contract is written, but generally, billing a client the full development time for an existing feature that "could be turned on in 10 min." is a good example of fraudulent misrepresentation. A business/industry that relies on that (like your example) is a racket.

    Yes, I understand that's how the world of "software as a service" works, and yes, I am calling it a racket.

  • The academic name for the field is quite literally “machine learning”.

    You are incorrect that these systems are unable to create/be creative; you are correct that creativity != consciousness (which is an extremely poorly defined concept to begin with …); and you are partially correct about how the underlying statistical models work. What you're missing is that by assigning probabilistic models to objects, a system can "think"/"be creative": these models don't need to have seen a "blue hexagonal strawberry" in order to reason about what that might mean and imagine what it looks like.
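    To make the "blue hexagonal strawberry" point concrete, here is a minimal toy sketch. The embeddings below are random stand-ins rather than anything learned from data, and the names (`compose`, `nearest`) are hypothetical; the only point is that a vector for a never-seen combination of concepts is perfectly well-defined.

    ```python
    import numpy as np

    # Toy embeddings for illustration only; a real model would learn
    # these vectors from data rather than have them randomly assigned.
    rng = np.random.default_rng(0)
    vocab = ["blue", "red", "hexagonal", "round", "strawberry", "banana"]
    embed = {word: rng.normal(size=8) for word in vocab}

    def compose(*words):
        """Combine attribute vectors into one concept vector.

        Averaging embeddings is the simplest way to represent a
        combination never observed as a whole, e.g. a
        'blue hexagonal strawberry'.
        """
        return np.mean([embed[w] for w in words], axis=0)

    def nearest(vec):
        """Return the known word whose embedding is most similar (cosine)."""
        sims = {w: vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v))
                for w, v in embed.items()}
        return max(sims, key=sims.get)

    # The composite vector exists even though this exact combination was
    # never seen -- the point made in the comment above.
    novel = compose("blue", "hexagonal", "strawberry")
    print(novel.shape, nearest(novel))
    ```

    The model never stored a "blue hexagonal strawberry" anywhere; it falls out of composing the parts it does have.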

    I would recommend this paper for further reading on the topic, and would like to point out that you are again correct that existing AI systems are far from human level on the proposed challenges, but they are inarguably able to "think", "learn", and "creatively" solve those proposed problems.

    The person you're responding to isn't trying to pick a fight; they're trying to show you that you have bought, whole cloth, into a logical fallacy and are being extremely defensive about it to your own detriment.

    That's nothing to be embarrassed about: the claim that "LLMs can't be creative because nothing is original, so everything is a derivative work" is part of a dedicated propaganda effort to further expand copyright and consolidate capital.


  • I partially agree with you, but I think you’re missing the end goal of Facebook et al.

    As HughJanus pointed out, it's not really any different from a person reading a book, and by that reasoning, using copyrighted material to train models like these falls well within the existing framework of "fair use".

    However, that depends entirely on "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes." I agree completely with you that OpenAI's products/business (the most blatant violator) easily violate "fair use" due to that clause. However, they're doing it, at least partially, to "force the issue" on the open question of "how much can public information be privatized?", with the goal of further privatizing and increasing commercial applications of raw data.

    As you pointed out, LLMs can only create facsimiles, not the original work; by that same logic, they can't exactly replicate their inputs either.

    No, I don't think artists can claim that they own any and all "cheap facsimiles" of their works, but by that same reasoning, none of the models produced should be allowed to be the enforceable property of any individual/company either.

    For further reading check out:

    • Kelly v. Arriba Soft Corporation on why "thumbnails" (and by extension LLMs, "eigen-images", etc.) are inherently transformative and constitute fair use.
    • Bridgeport Music, Inc. v. Dimension Films for the negative impacts that ruling has had, and how it still doesn't protect artists from having their work used to train an LLM.
    • "Variational auto-encoders" for understanding how the latest models actually do achieve a significant amount of "originality"; I would argue they are able to be minimally creative.
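    The variational auto-encoder mechanism behind that last point can be sketched in a few lines. This is a minimal illustration, not a trained model: the weights below are random stand-ins and there is no training loop. The point is the structure — encode an input to a latent distribution, sample with the reparameterization trick, and note that decoding a point drawn from the prior produces an output that corresponds to no training example at all.

    ```python
    import numpy as np

    # Minimal VAE-style sketch in plain NumPy. Weights are random
    # stand-ins (untrained); only the mechanism is being illustrated.
    rng = np.random.default_rng(42)
    D_IN, D_LATENT = 16, 4

    W_enc = rng.normal(scale=0.1, size=(D_IN, 2 * D_LATENT))
    W_dec = rng.normal(scale=0.1, size=(D_LATENT, D_IN))

    def encode(x):
        """Map an input to the mean and log-variance of a latent Gaussian."""
        h = x @ W_enc
        return h[:D_LATENT], h[D_LATENT:]   # mu, log_var

    def reparameterize(mu, log_var):
        """Sample z = mu + sigma * eps (the reparameterization trick)."""
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * log_var) * eps

    def decode(z):
        return np.tanh(z @ W_dec)

    # Reconstruct a real input...
    x = rng.standard_normal(D_IN)
    mu, log_var = encode(x)
    recon = decode(reparameterize(mu, log_var))

    # ...or generate something "original": decode a point sampled from
    # the prior, which matches no training example exactly.
    novel = decode(rng.standard_normal(D_LATENT))
    print(recon.shape, novel.shape)
    ```

    Sampling from the prior rather than from an encoded input is exactly where the "originality" claim comes from: the decoder's output there is not a copy of anything it was shown.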