Yeah, this is actually a pretty great application for AI. It’s local, privacy-preserving and genuinely useful for an underserved demographic.
One of the most wholesome and actually useful applications for LLMs/CLIP that I’ve seen.
(6.9-4.2)/(2024-2018) = 0.45 “version increments” per year.
4.2/(2018-1991) ≈ 0.16 “version increments” per year.
So, the pace of version increases over the past 6 years has been roughly triple the average of the preceding 27 years, since Linux’s first release.
I guess I can see why 6.9 would seem pretty dramatic for long-time Linux users.
I wonder whether development has actually accelerated, or if this is just a change in the approach to the release/versioning process.
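As a quick sanity check of the figures above (with the obvious caveat that treating version numbers as a linear scale is a crude simplification — versions are labels, not measurements):

```python
# Rough sanity check of the release-pace arithmetic above.
# Caveat: kernel version numbers aren't really a linear quantity,
# so this is only a back-of-the-envelope comparison.
recent = (6.9 - 4.2) / (2024 - 2018)   # increments/year, 2018-2024
earlier = 4.2 / (2018 - 1991)          # increments/year, 1991-2018
print(f"{recent:.2f} vs {earlier:.2f} -> {recent / earlier:.1f}x faster")
# prints: 0.45 vs 0.16 -> 2.9x faster
```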
Tbf, 500ms latency on (IIRC) a loopback network connection in a test environment is a lot. It’s not hugely surprising that a curious engineer dug into that.
I don’t think it’s necessarily a bad thing that an AI got it wrong.
I think the bigger issue is why the model got the diagnosis wrong: it is a language model, and fundamentally not fit for use as a diagnostic tool. Not even as a screening/aid tool for physicians.
There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.
Precisely. Many of the narrowly scoped solutions work really well, too (for what they’re advertised for).
As of today though, they’re nowhere near reliable enough to replace doctors, and any breakthrough on that front is very unlikely to be a language model IMO.
Exactly. So the organisations creating and serving these models need to be clearer about the fact that they’re not general purpose intelligence, and are in fact contextual language generators.
I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (plus require a doctor to verify the result).
There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.
Funnily enough, those systems aren’t using language models 🙄
(There is Google’s Med-PaLM, but I suspect it wasn’t very useful in practice, which is why we haven’t heard anything since the original announcement.)
It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness… are somehow an authority on anything.
I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading said person to fully expect they’d be dead by July, when in fact they were perfectly healthy.
These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.
The misinformation is causing real harm.
I saw a job posting for a Senior Software Engineer position at a large tech company (not Big Tech, but high profile and widely known) which required candidates to have “an excellent academic track record, including in high school.” A lot of these requirements feel deliberately arbitrary, more an effort to thin the herd than to filter for good candidates.
Idk… in theory they probably don’t need to store a full copy of the page for indexing, and could move to a more data-efficient format if they do. Also, not serving it means they don’t need to replicate the data to as many serving regions.
But I’m just speculating here. Don’t know how the indexing/crawling process works at Google’s scale.
This is probably an attempt to save money on storage costs. Expect cloud storage pricing from Google to continue to rise as they reallocate spending towards ML hardware accelerators.
Never been happier to have a proper NAS setup with offsite backup 🙃
Zsh is a nice balance of modern features and backwards compatibility with bash.
Crostini is an official feature built by Google that lets you run Linux in a tightly integrated virtual machine inside Chrome OS. You keep a lot of Chrome OS’s security benefits while getting a Linux machine to play with.
That said, no, it’s not illegal to install a different operating system on your Chromebook hardware. They’re just PCs under the hood. You might lose some hardware security features though, e.g. the capabilities provided by integration with the Titan silicon.
If you had a job at Google, corporate IT would definitely not be happy if you wiped the company-managed OS and installed an unmanaged Linux distro :)
The reddest of red flags.
Open source vulnerabilities typically stem from poorly written code
Yeah, because paid programmers never write bad closed-source code…
Sonarr and Radarr with Ombi for requests if desired. Transmission + OpenVPN for the download side.
Or you could manually rip DVDs/Blu-rays of the stuff you want to watch, if you can still get hold of them.
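For anyone curious what that stack looks like in practice, here’s a minimal sketch using the widely used community Docker images (the image names are real, but all paths, ports, and provider values are illustrative placeholders — adjust for your own setup):

```shell
# Hypothetical single-host sketch of the Sonarr/Radarr + VPN'd
# Transmission stack described above. Host paths are placeholders.
docker run -d --name=sonarr \
  -p 8989:8989 \
  -v /srv/config/sonarr:/config -v /srv/tv:/tv \
  lscr.io/linuxserver/sonarr

docker run -d --name=radarr \
  -p 7878:7878 \
  -v /srv/config/radarr:/config -v /srv/movies:/movies \
  lscr.io/linuxserver/radarr

# haugene/transmission-openvpn bundles Transmission with an OpenVPN
# client, so downloads only flow through the tunnel. NET_ADMIN is
# needed for the container to configure its VPN interface.
docker run -d --name=transmission-vpn \
  --cap-add=NET_ADMIN \
  -p 9091:9091 \
  -v /srv/downloads:/data \
  -e OPENVPN_PROVIDER=<your-provider> \
  -e OPENVPN_USERNAME=<user> \
  -e OPENVPN_PASSWORD=<pass> \
  haugene/transmission-openvpn
```

Ombi would slot in alongside these the same way (it also has a linuxserver.io image), pointed at the Sonarr/Radarr APIs for requests.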
Did they ever satisfactorily resolve that issue, or did the media just stop covering it as aggressively? Last I heard they were trying to add solar shields to the satellites to reduce their albedo.
I’d argue the bigger moral is that you should always own your online identity. Buy your own domain (@yourname.xyz or something like that) and host your email on it. Then if Google bans you, you just switch email providers and keep your address.
IIRC DuckDuckGo wasn’t a fan of the Australian media bargaining bill either. I suspect they will also deindex news sites in Canada should amendments not be made.
I haven’t seen the Canadian one and this is honestly the first I’ve heard of it, but the idea that a referrer has to pay a news website for directing traffic to them is ludicrous to me.
Power management is going to be a huge emerging issue with the deployment of transformer model inference to the edge.
I foresee some backpedaling from this idea that “one model can do everything”. LLMs have their place, but sometimes a good old LSTM or CNN is a better choice.