Once. They do not have the ability to learn or adapt on their own. They are created by humans through “deep learning”, but that is fundamentally different from continuously learning based on one’s own actions and experiences.
We are prediction machines, but nothing like ChatGPT. Current AI has no ability to learn, adapt, or even consider the future.
Well I upvoted the post so that people will see the comments!
You managed to get your money back?! How?
I think that’s an American thing. Besides, that money is long gone since I made the purchase several years ago.
I asked for a refund when they kept delaying shipment of my Librem 5. I was simply denied and that was it. They told me I could still choose to receive the phone, but I don’t want it since it’s a bad, practically useless product now.
I reported them in my country for it.
I reply to people on lemmy on a case-by-case basis. I decide how to eat food on a case-by-case basis. But if you give me a deck of cards and tell me to shuffle them, I generally do not decide how to shuffle on a case-by-case basis; it doesn’t matter whose cards they are.
That’s not what case-by-case means. Wiktionary:
Separate and distinct from others of the same kind; treated individually.
Case-by-case implies that each treatment is different and is not generalisable; but the fact that they use a patient’s own tissue does not make each individual treatment different. If you want to extend the logic, you might call vaccination a case-by-case treatment as well, since they use different needles for each person.
It was done on a case-by-case basis. Each person has their own therapy tailored for them. This does not appear to be a mass solution.
I’m not sure what you’re expecting before something can be considered a cure. What they are describing is a treatment procedure that uses the patient’s own tissue. How does that make it case-by-case?
It can at least get one unstuck, past an indecision paralysis, or give an outline of an idea. It can also be useful for searching through data.
If this works, it’s noteworthy. I don’t know if similar results have been achieved before because I don’t follow developments that closely, but I expect that biological computing is going to attract a lot more attention in the near-to-mid-term future. Given its efficiency, and the increasingly tight constraints that environmental pressure imposes on us, I foresee it eventually eclipsing silicon-based computing.
FinalSpark says its Neuroplatform is capable of learning and processing information
They sneak that in there as if it’s just a cool little fact, but this should be the real headline. I can’t believe they just left it at that. Deep learning cannot be the future of AI, because it doesn’t facilitate continuous learning. Active inference is a term that will probably be thrown about a lot more in the coming months and years, and as evidenced by all kinds of living things around us, wetware architectures are highly suitable for the purpose of instantiating agents doing active inference.
I don’t know about google because I don’t use it unless I really can’t find what I’m looking for, but here’s a quick ddg search with a very unambiguous and specific question, and from sampling only the top 9 results I see 2 that are at all relevant (2nd and 5th):
In order to answer my question, I need to first mentally filter out 7/9 of the results visible on my screen, then open both of the relevant ones in new tabs and read through lengthy discussions in order to find out if anyone has shared a proper solution.
Here is the same search using perplexity’s default model (not pro, which is a lot better at breaking down queries and including relevant references):
and I don’t have to verify all the details because even if some of it is wrong, it is immediately more useful information to me.
I want to re-emphasise though that using LLMs for this can be incredibly frustrating too, because they will often insist assertively on falsehoods and generally act really dumb, so I’m not saying there aren’t pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferred to the nonsense produced by a stupid language model.
Edit: I didn’t answer your question about malicious, but I can give some examples of what I consider malicious, and you may agree that it happens frequently enough:
etc.
Maybe I can share some insight into why one might want to.
I hate searching the internet. It’s a massive mental drain for me to try to figure out how to put my problem into words that others with similar problems will have used before me; it’s my mental processing power wasted on purely linguistic overhead instead of on trying to understand and learn about the problem.
I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.
And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even if they are a delight in UX, they are distracting me from what I really want, which is (most of the time) information, not to experience someone’s idiosyncratic, artistic ideas for how to organise and present data, or how to keep me ‘engaged’.
So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something, because I can more easily spot plausible bullshit and discard it or quickly check its veracity than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos instead.
And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it’s irresponsible to embed one the way Google has.
I think it’s probably best to… uhh… sort of gatekeep this tech so that it’s mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.
We’re all living in amerikka
koka kola
santa klaus
I don’t remember encountering the particular bug they’re describing. I was hoping it was about the behaviour of drag-and-dropping something into the browser, such as those “drop a file here to upload” areas. I am often simply unable to make that work, because instead of the thing being dropped into the webpage’s element, the browser opens the file directly, which is not really something I ever want to do.
That is a different car brand, though?
I finally got around to restarting my system after adding hardware.nvidia.modesetting.enable = true; to my NixOS config, and it works perfectly! Thank you for the suggestion. I likely wouldn’t have figured that out on my own any time soon.
Thanks for the suggestion. sudo cat /sys/module/nvidia_drm/parameters/modeset indeed prints N, so I’ll try adding that to my system config.
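For anyone else landing on this thread, here is a minimal sketch of what that change looks like in a NixOS configuration. The modesetting option is the one named above; the videoDrivers line is a typical accompanying setting for the proprietary driver, not something from this thread, and may already be present (or differ) in your config.

```nix
# Sketch of the relevant fragment of /etc/nixos/configuration.nix.
{ config, pkgs, ... }:

{
  # Typical companion setting: use the proprietary NVIDIA driver.
  services.xserver.videoDrivers = [ "nvidia" ];

  # Enable kernel modesetting for the NVIDIA driver; after a rebuild
  # and reboot, /sys/module/nvidia_drm/parameters/modeset should
  # print Y instead of N.
  hardware.nvidia.modesetting.enable = true;
}
```

After a sudo nixos-rebuild switch and a reboot, re-checking the parameters file should confirm the change.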
I think the Xorg vs Wayland situation is not too dissimilar to that of Windows vs Linux. Lots of people are waiting for all of their games/software to work (just as well or better) on Linux before switching. I believe that in most cases, switching to Linux requires that a person goes out of their way to either find alternatives to the software they use or altogether change the way they use their computer. It’s a hard sell for people who only use their computer to get their work done, and that’s why it is almost exclusively developers, the tech-curious, idealists, government workers, and grandparents who switch to Linux (the last thanks to a family member who falls into any subset of the former categories). It may require another generation (of people) for X11 to be fully deprecated, because even amongst Linux users there are those who are not interested in changing their established workflow.
I do think it’s unreasonable to expect everything to work the same when a major component is being replaced. Some applications that are built with X11 in mind will never be ported/adapted to work on Wayland. It’s likely that for some things, no alternatives are ever going to exist.
Good news is that we humans are complex adaptive systems! Technology is always changing - that’s just the way of it. Sometimes that will lead to perceived loss of functionality, reduction in quality, or impeded workflow in the name of security, resource efficiency, moral/political reasons, or other considerations. Hopefully we can learn to accept such change, because that’ll be a virtue in times to come.
(This isn’t to say that it’s acceptable for userspace to be suddenly broken because contributors thought of a more elegant way to write underlying software. Luckily, X11 isn’t being deprecated anytime soon for just this reason.)
Ok I’m done rambling.
At the very least it doesn’t handle spoilers correctly