AI Liability, Cyber Risks, and the Rise of Autonomous Coding
Overview
Regulators are confronting the security implications of powerful new models while AI firms push for liability shields. At the same time, developers are embracing AI agents that can write and ship code around the clock, sparking both excitement and concern across the community.
Hacker News Stories
OpenAI backs Illinois bill that would limit liability for AI‑enabled mass deaths or financial disasters
428 points · 310 comments · by smurda
OpenAI testified in favor of an Illinois bill (SB 3444) that would shield AI labs from liability for "critical harms" such as the death of 100+ people or $1 billion in property damage, provided the labs did not act intentionally or recklessly and published safety reports. The legislation defines a "frontier model" as any AI system trained with more than $100 million in compute, covering the biggest U.S. labs. OpenAI argues the bill prevents a patchwork of state regulations and focuses on risk reduction while keeping AI accessible. Policy experts say the measure goes further than prior bills and could set a national precedent. Critics warn it may let labs escape accountability when powerful models are misused.
Interesting Points
- The bill would protect AI developers from liability unless they intentionally or recklessly cause a critical harm.
Top Comment Threads
- himata4113 (25 replies) -- The commenter shares a personal experiment in which GPT‑5.4 and Opus‑4.6 generated step‑by‑step instructions for creating neurotoxic agents, highlighting the models' ability to surface dangerous knowledge when prompted. Others note the difficulty of stopping such outputs and argue that the labs behind these models should be held responsible for facilitating illicit activities.
AI assistance when contributing to the Linux kernel
267 points · 173 comments · by hmokiguess
The Linux kernel documentation now includes a formal policy for using AI coding assistants. All AI‑generated code must comply with the GPL‑2.0 license, include an "Assisted‑by" tag, and never add a Signed‑off‑by line, which only humans may certify. Contributors are required to review any AI output, ensure licensing compliance, and take full responsibility for the patch. The policy also outlines attribution standards and clarifies that basic development tools need not be listed.
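As a rough illustration of the attribution rules (the tool name below is hypothetical, and the exact trailer format is defined by the kernel documentation, not this sketch), a conforming patch footer might pair the new tag with the contributor's own certification:

    Assisted-by: ExampleCoder 2.1 (hypothetical AI tool)
    Signed-off-by: Jane Developer <jane@example.org>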
Interesting Points
- AI agents are prohibited from adding Signed‑off‑by tags; only a human can certify the Developer Certificate of Origin.
Top Comment Threads
- qsort (4 replies) -- The commenter notes that the rules are straightforward: developers may use AI but must take full responsibility for the code and ensure it meets the kernel's licensing requirements, calling the policy "refreshingly normal".
US summons bank bosses over cyber risks from Anthropic's latest AI model
105 points · 92 comments · by ascold
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with top bank CEOs to discuss the cybersecurity threats posed by Anthropic's new Claude Mythos model. The model can automatically discover thousands of software vulnerabilities, raising fears it could be weaponized by malicious actors. Regulators warned banks to harden their defenses and monitor for AI‑generated exploit tools. Anthropic has limited the model’s release pending further safety reviews.
Interesting Points
- Claude Mythos can identify and exploit vulnerabilities across major operating systems and browsers at a scale far beyond human researchers.
Top Comment Threads
- causal (7 replies) -- The commenter argues that coverage pairing Anthropic’s Mythos with the older Glasswing project stokes panic, and that the real issue is the longstanding neglect of software vulnerabilities, not the model itself.
Scientists invented a fake disease. AI told people it was real
86 points · 88 comments · by latexr
A Nature news feature describes how researchers invented a non‑existent disease, Bixonimania, and found that AI chatbots told users it was real. The article examines how language models can pick up and elaborate on fabricated medical information, and the risks this poses for misinformation, especially in health contexts. It highlights the need for better verification and safeguards when deploying AI for factual content.
Interesting Points
- AI chatbots confidently affirmed an entirely fictitious medical condition as real, demonstrating a serious misinformation hazard.
Top Comment Threads
- daoboy (7 replies) -- The commenter notes that LLMs can be gamed by seeding the internet with false narratives, making it cheap to push misinformation, and warns that the same technique could be used for marketing.
Launch HN: Twill.ai (YC S25) – Delegate to cloud agents, get back PRs
65 points · 59 comments · by danoandco
Twill.ai offers a cloud service that runs AI coding agents 24/7 to research, implement, test, and open pull requests for software projects. The platform supports multiple models (Claude, OpenCode, Codex) and can run agents in parallel, aiming to boost developer productivity with minimal manual intervention. It targets both solo developers and enterprises seeking continuous integration at scale.
Interesting Points
- Twill agents can operate continuously, opening tested pull requests that are ready for human review.
Top Comment Threads
- 2001zhaozhao (3 replies) -- The commenter discusses the trade‑off between on‑premise and cloud deployments for 24/7 agents, noting that a beefy desktop can handle a few agents but cloud scales better for parallel workloads.
Reddit Stories
60% MatMul Performance Bug in cuBLAS on RTX 5090
85 points · 6 comments · r/MachineLearning · by u/NoVibeCoding
A user reports that cuBLAS dispatches an inefficient kernel for batched FP32 workloads on RTX 5090 GPUs, using only about 40% of the available compute. The author provides a custom kernel that outperforms cuBLAS across a range of matrix sizes, highlighting a significant performance regression in CUDA 13.3.0.
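For readers who want to reproduce this kind of measurement, the standard approach is to time the batched GEMM with CUDA events and convert elapsed time into achieved FLOP/s. The sketch below is a generic benchmark against the public cublasSgemmStridedBatched API, not the author's code; the matrix sizes and iteration count are arbitrary placeholders, and error checking is omitted for brevity:

    // Minimal sketch: measure achieved TFLOP/s of a batched FP32 GEMM in cuBLAS.
    // Build (roughly): nvcc -O2 bench.cu -lcublas
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        const int m = 1024, n = 1024, k = 1024, batch = 64, iters = 50;
        const float alpha = 1.0f, beta = 0.0f;

        float *A, *B, *C;
        cudaMalloc((void **)&A, sizeof(float) * (size_t)m * k * batch);
        cudaMalloc((void **)&B, sizeof(float) * (size_t)k * n * batch);
        cudaMalloc((void **)&C, sizeof(float) * (size_t)m * n * batch);

        cublasHandle_t h;
        cublasCreate(&h);

        // One warm-up call so first-use overhead doesn't skew the timing.
        cublasSgemmStridedBatched(h, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                                  &alpha, A, m, (long long)m * k,
                                  B, k, (long long)k * n,
                                  &beta, C, m, (long long)m * n, batch);

        cudaEvent_t t0, t1;
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);
        cudaEventRecord(t0);
        for (int i = 0; i < iters; ++i)
            cublasSgemmStridedBatched(h, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                                      &alpha, A, m, (long long)m * k,
                                      B, k, (long long)k * n,
                                      &beta, C, m, (long long)m * n, batch);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);

        // Each GEMM does 2*m*n*k FLOPs; `batch` GEMMs per iteration.
        double tflops = 2.0 * m * n * k * batch * iters / (ms * 1e-3) / 1e12;
        printf("achieved: %.1f TFLOP/s\n", tflops);

        cublasDestroy(h);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Dividing the printed figure by the GPU's theoretical FP32 peak gives the utilization fraction the post cites (about 40% in the report).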
Interesting Points
- cuBLAS achieves only ~40% of the theoretical compute on RTX 5090 for batched FP32 operations.
Top Comment Threads
- u/krapht (20 points · permalink) -- The commenter asks why the post is on Reddit instead of NVIDIA forums, prompting the author to explain that NVIDIA fixes can take months while the bug affects many users now.
Final voting results for Qwen 3.6
581 points · 257 comments · r/LocalLLaMA · by u/jacek2023
The community shares the final voting tallies for the Qwen 3.6 language model, indicating strong support and positioning it as a leading open‑source alternative. The post includes a detailed breakdown of votes across various criteria such as performance, openness, and resource efficiency.
Interesting Points
- Qwen 3.6 received the highest overall vote count among the evaluated models.
Top Comment Threads
- u/ambient_temp_xeno (301 points · permalink) -- A brief, tongue‑in‑cheek comment noting that “Moe enjoyers split the vote, densocrats reap the benefits.”
Someone threw a Molotov cocktail at Sam Altman's home and then made threats outside OAI. (No injuries, only minimal damage)
369 points · 264 comments · r/singularity · by u/socoolandawesome
A Molotov cocktail was thrown at the residence of OpenAI CEO Sam Altman, causing minor property damage but no injuries. The incident has sparked debate about the growing societal backlash against AI leaders and the potential for violent protests as AI adoption accelerates.
Interesting Points
- Physical attacks on AI executives may become more common as public anxiety over AI grows.
Top Comment Threads
- u/chlebseby (198 points · permalink) -- The commenter warns that widespread job loss from AI could trigger civil unrest, citing the Molotov incident as an early sign.
vibecoders using claude, chat gpt and gemini for the same project be like:
3035 points · 106 comments · r/ChatGPT · by u/Itachi_Singh
A meme‑style post shows a developer juggling Claude, ChatGPT, and Gemini simultaneously on a single project, highlighting the community’s fascination with multi‑model workflows. The post went viral, amassing over 3,000 upvotes and sparking jokes about AI‑powered code generation.
Interesting Points
- Developers are increasingly experimenting with multiple LLMs in parallel to leverage each model’s strengths.
Top Comment Threads
- u/ClankerCore (772 points · permalink) -- A humorous comment noting the poster’s fierce loyalty to their “man” (presumably a favorite model).
We know how this whole AI thing ends. We’re doing it anyway.
93 points · 73 comments · r/ArtificialInteligence · by u/bostonglobe
The post’s title expresses a fatalistic view that AI development is inevitable despite known risks, prompting a brief discussion about the ethical implications of pushing forward with powerful models.
Interesting Points
- A sentiment that technological momentum outweighs caution, reflecting a common attitude in AI circles.
Top Comment Threads
- u/JustBrowsinAndVibin (38 points · permalink) -- A tongue‑in‑cheek reply noting that “a happy ending with AI wouldn’t be as entertaining.”
Quick Mentions
- Claude Mythos triggers cybersecurity fears at highest levels: Powell … (0 points · discussion · HN) -- Financial Express coverage of the same Treasury‑Fed meeting, emphasizing the national‑security dimension of the Mythos model.
- OpenAI Backs Bill That Would Limit Liability for AI‑Enabled Mass Deaths or Financial Disasters (0 points · discussion · HN) -- Bloomberg’s report on OpenAI’s legislative strategy surrounding the Illinois liability bill.
Report generated in 5m 1s.