  • Man, I’ve had two separate devices fail to install updates in the last week, leading to tons of weirdness and troubleshooting. I even had to chkdsk c: /F at one point like a neanderthal (exact command below, for reference).

    I have enough computers lying around that I’d move more of them to other OSs, Linux included, if I hadn’t already tried that and found it as much of a hassle or more on those specific machines, be it compatibility issues or just fitness for the application. I’m not married to Windows at all, but there are definitely things that are much easier to handle there, which does justify sticking with it through the reinstalls and awkward weirdness on those.
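    (For reference, that was just the classic disk check, run from an elevated Command Prompt:

        chkdsk C: /F

    The /F flag tells chkdsk to fix any filesystem errors it finds; since C: is in use while Windows is running, it offers to schedule the scan for the next reboot instead.)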


  • For straight revenue, yeah, that’d be right. Technically everything else is a rounding error. But if Epic were one of those single-game unicorns like Riot or Rovio, this would not make much sense. The synergies of Unreal with both the movie and theme park business seem like a better fit for Disney. I mean, assuming the move makes actual sense at all: Disney is out there talking about game collaborations, and it’s not like it’s the first time they’ve spent money randomly and poorly in the gaming business. I just think the investment would make sense even if Fortnite weren’t in the mix.

    And either way it’s being blown out of proportion by the news, because they haven’t even bought the company. $1.5B is what, 10% of what Tencent owns?


  • That is most likely going to generate less revenue than promoting donations, or a comparable amount at best. WinRAR is the meme example.

    From a PR and marketing perspective, if I wanted to maximize my revenue as a single developer I would set up a Patreon or encourage recurring donations through the software by offering bragging-rights perks (merch, insider access, early access to unfinished builds and so on). A single mandatory payment just reproduces the piracy/licensing dynamic of commercial software, and shaming people into paying without actual coercion only makes you less appealing to the people who would have donated anyway.




  • No, hey, let me be clear, I don’t think you’re actively an ideologue, but you can absolutely disagree with an ideology, or actively advocate against it, and still have your worldview filtered through its lens. None of us is immune to our context or our upbringing, least of all me.

    What I do say is that the notion that “it’s not free, it all comes from taxes” is a very active framing, and it comes from an anarchocapitalist perspective, whether you agree with it or not. Yes, there is a cost to public services. And yes, you do have to tax people to fund the government that is meant to provide those services, but paying taxes isn’t the same as paying for a service, and public services aren’t “services you pay with your taxes”, they’re… well, public services.

    And in the same vein, having an industry built on tipping is not sustainable, and yeah, that’s a fairly (anarcho)capitalist perspective. Screw tips. You can contribute to an open source project, be it with cash, work, promotion or whatever, but you’re definitely not obligated to do so, and that system must work within those parameters. FOSS is not software paid in tips, that’s not the point. It may be crowdsourced, but that’s not the same thing.

    So hey, I get it, you don’t ideologically support those things, consciously. If you take anything from my comment let it be that you’re still thinking about it from that framework and there are other ways to frame it. You’re right that eventually the money has to come from somewhere, but how you frame the situation impacts which somewheres you’re willing to explore.



  • If the system relies on integrity, it will fail. If it relies on shame or moral obligation, it will fail. There is a reason that, on the other side of the fence, they couldn’t root out piracy until they started providing more convenient (but more expensive) alternatives. If you rely on people feeling “obligated” to pay, they either won’t pay anyway or won’t use the software. That’s not a viable option.

    So you’re left with the other option. Whether one agrees that FOSS is “broken” or not, the only way to make the system sustainable is… well, to make it sustainable. If that means enacting political change, then that’s where the effort should go.


  • It’s not a strawman argument. My response (which wasn’t to you) was triggered by the notion that we “need to normalize paying for foss”. I don’t think that’s true, and I do think it’d lead to generating a “tipping system”. Plus, again, not what the linked article is driving at.

    I’m also not fond of “we live in a system” as an argument for playing by the system’s rules. I mean, by that metric people should just charge for access and call it a day, that’s what the “system” is encouraging. We need sustainable flows of income towards FOSS, but that doesn’t mean step one is normalizing end users feeling obligated to pay.


  • We absolutely must financially incentivize software developers. But charity is not a substitute for financing in a healthy system, and the sources of financing can’t rely on badgering individuals into feeling guilty for using resources in the public domain (or at least publicly available) without making a voluntary contribution. I agree with the OP and the article that the support system shouldn’t be charity. Tax the evaders, redistribute wealth, provide public funding for FOSS. We should create a system where FOSS is sustainable, not held up by tips like a service job in an anarchocapitalist hellscape.


  • No, it’s not, and it’s not the argument the article is making. The article is arguing for developers receiving public support financed by taxing corporations that are currently evading massive amounts of money.

    This is not a case of “no one”, anyway. “Throw in a coffee if you can” is already how this works. And it’s not just “a coffee”: plenty of openly available software has alternate revenue streams, support from corporate backers and other sustainability tools besides voluntary crowdsourcing. The OP is pondering a systemic solution, not a moral obligation based on charity and capitalist conceptions of how much one’s time is worth.


  • I hate this argument so, so passionately.

    It’s the argument you hear from anarchocapitalists trying to argue that there are hidden costs to the res publica and thus it should be dismantled. Yes, we all have a finite amount of time. Yes, we can all quantify the cost of every single thing we do. That is a terrible way to look at things, though. There are things that are publicly available or owned by the public or in the public domain, and those things serve a purpose.

    So yeah, absolutely, if you can afford it support people who develop open software. Developing open software is absolutely a job that many people have and they do pay the bills with it. You may be able to help crowdfund it if you want to contribute and can’t do it any other way (or hey, maybe it’s already funded by corporate money, that’s also a thing). But no, you’re not a freeloader for using a thing that is publicly available while it’s publicly available. That’s some late stage capitalism crap.

    Which, in fairness, the article linked here does acknowledge, and it’s coming from absolutely the right place. I absolutely agree that if you want to improve the state of people contributing to publicly available things, be it health care or software, you start by redistributing the wealth of those who don’t contribute to the public domain and profit disproportionately. I don’t know if that looks like UBI or not, but still, redistribution. And, again, you can absolutely donate if you can afford it. I actually find the thought experiment of calculating the cost interesting; the extrapolation that it’s owed, not so much.



  • I don’t disagree on principle, but I do think it requires some thought.

    Also, that’s still a pretty significant backstop. You’d basically need models to have a way to check generated content for copyright violations, the way YouTube does, for instance. And whether enforcing that requirement is affordable to anybody but the big companies is already a big debate.

    But hey, maybe we can solve both issues the same way. We sure as hell need a better way to handle mass human-produced content and its interactions with IP. The current system does not work and it grandfathers in the big players in UGC, so whatever we come up with should work for both human and computer-generated content.


  • That’s not “coming”, it’s an ongoing process that has been going on for a couple hundred years, and it absolutely does not require ChatGPT.

    People genuinely underestimate how many of these things have been an ongoing concern. Much like crypto isn’t that different from what you can do with a server, “AI” isn’t a magic key that unlocks automation. I don’t even know how this mental model works. Is the idea that companies that are currently hiring millions of copywriters will just rely on automated tools? I get that yeah, a bunch of call center jobs may get cut (again, a process that has been ongoing for decades), but how is compensating Facebook for scraping its social media posts for text data going to make that happen less?

    Again, I think people don’t understand the parameters of the problem, which is different from saying that there is no problem here. If anything, the conversation is a net positive; we should have been having it in 2010, when Amazon and Facebook and Google were already all-in on this process through both ML tools and other forms of data analysis.


  • I’m gonna say those circumstances changed when digital copies and the Internet became a thing, but at least we’re having the conversation now, I suppose.

    I agree that ML image and text generation can create something that breaks copyright. You can for sure duplicate images or use copyrighted characters. This is also true of YouTube videos and TikToks and a lot of human-created art. I think it’s a fascinating question to ponder whether the infraction is in what the tool generates (i.e. did it make a picture of Spider-Man, who is under copyright and thus can’t be used that way, and sell it to you for money) or in the ingest that enables it to do that (i.e. it learned on pictures of Spider-Man available on the Internet, and thus all output is tainted because the training images were copyrighted).

    The first option makes more sense to me than the second, but if I’m being honest I don’t know if the entire framework makes sense at this point at all.


  • A lot of this can be traced back to the invention of photography, which is a fun point of reference if one goes and digs up the debate from the time.

    In any case, the idea that humans can only produce so fast for so long, and that this somehow keeps the channel clean, just doesn’t track. We are already flooded by low quality content enabled by social media. There are seven billion of us, two or three billion of those are on social platforms, and a whole bunch of the content being shared in those channels is made by pointing corporate-built phones at stuff. I guarantee that people will still go to museums to look at art regardless of how much cookie cutter AI stuff gets shared.

    However, I absolutely wouldn’t want a handful of corporations to have the ability to empower their employed artists with tools that run 10x faster than what freelance artists can get. That is a horrifying proposition. Art is art. The difficulty isn’t in making the thing technically (say hello, Marcel Duchamp, I bet you thought you had already litigated this). Artists are gonna art, but it’s important that nobody has a monopoly on the tools to make art.


  • It’s not right to say that ML output isn’t good at practical tasks. It is, it’s already in use, and it has been for ages. The conversation is skewed by the relatively anecdotal fact that chatbots and image generation got good enough to go viral, but ML models are being used for a bunch of practical purposes, from speeding up repetitive, time-consuming tasks (e.g. cleaning up motion capture, facial modelling or lip animation in games and movies) to specialized ones (so much scientific research is using ML tools these days).

    Now, a lot of those are built on fully owned datasets, but not all, and the ramifications there are also important. People dramatically overestimate the impact of trash content flooding the channels (which is already the case, as you say) and dramatically underestimate the applications of the underlying tech beyond the couple of viral apps they only got access to recently.


  • Yep. The effect of this, as currently framed, is that you get data ownership clauses in EULAs forever and only major data brokers like Google or Meta can afford to use this tech at all. It’s not even a new scenario; it already happened when those exact companies were pushing facial recognition and other big data tools.

    I agree that the basics of modern copyright don’t work great with ML in the mix (or with the Internet in the mix, while we’re at it), but people are leaning on the viral negativity to slip very unwanted consequences past everybody before anybody can make a case for good uses of the tech.


  • I think, viral outrage aside, there is a very open question about what constitutes fair use in this application. And I think the viral outrage misunderstands the consequences of enforcing the notion that you can’t use openly scrapable online data to build ML models.

    Effectively, what the copyright argument does here is ensure that ML models can only legally be made by Meta, Google, Microsoft and maybe a couple of other companies. OpenAI can say whatever; I’m not concerned about them, but I am concerned about open source alternatives getting priced out of that market. I am also concerned about what it does to previously available APIs, as we’ve seen with Twitter and Reddit.

    I get that it’s fashionable to hate on these things, and it’s fashionable to repeat the bit of misinformation about models being a copy or a collage of their training data, but there are ramifications here that people aren’t talking about, and I fear we’re headed toward the worst possible future on this, where AI models are effectively ubiquitous but legally limited to major data brokers who added clauses claiming AI training rights over their billions of users’ data.