Deep Learning Weekly: Issue 426
This week in deep learning, we bring you Introducing Gemini Enterprise, A small number of samples can poison LLMs of any size, and a paper on Self-Adapting Language Models.
You may also enjoy Microsoft AI’s MAI-Image-1, Agents 2.0: From Shallow Loops to Deep Agents, a paper on Making, not Taking, the Best of N, and more!
As always, happy reading and hacking. If you have something you think should be in next week’s issue, find us on Twitter: @dl_weekly.
Until next week!
Industry
Introducing Gemini Enterprise
Google introduced Gemini Enterprise, a complete, AI-optimized platform that includes a no-code workbench, a centralized governance framework, and integrations with existing business applications.
Introducing MAI-Image-1, debuting in the top 10 on LMArena
Microsoft AI announced MAI-Image-1, their first image generation model developed entirely in-house, debuting in the top 10 text-to-image models on LMArena.
Salesforce announces Agentforce 360 as enterprise AI competition heats up
Salesforce announced the latest version of Agentforce 360, which includes new ways to instruct, build, and deploy AI agents.
Kernel raises $22M to power browser infrastructure for AI agents
Kernel has raised $22 million in funding to scale its platform so that AI agents can reliably navigate the web, maintain persistent sessions, and use it as infrastructure.
MLOps & LLMOps
Agents 2.0: From Shallow Loops to Deep Agents
An architectural post about the shift from "Shallow Agents" to "Deep Agents" that use explicit planning, sub-agents, and persistent memory to solve complex, multi-step problems.
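To make the pattern concrete, here is a minimal sketch of the deep-agent shape the post describes: an explicit plan, sub-agents dispatched per step, and memory that persists across steps. Everything here is illustrative; `call_llm` and the class names are hypothetical placeholders, not the post's actual code.

```python
# Illustrative deep-agent loop: plan explicitly, run each step in a
# sub-agent context, and accumulate results in persistent memory.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class DeepAgent:
    memory: dict = field(default_factory=dict)  # persists across steps

    def plan(self, task: str) -> list[str]:
        # A shallow agent would loop tool calls directly; a deep agent
        # first writes down an explicit plan it can revisit.
        plan_text = call_llm(f"Break this task into numbered steps: {task}")
        return [line for line in plan_text.splitlines() if line.strip()]

    def run_subagent(self, step: str) -> str:
        # Each step gets a fresh sub-agent context with shared memory
        # injected, rather than one ever-growing transcript.
        context = "\n".join(f"{k}: {v}" for k, v in self.memory.items())
        return call_llm(f"Context:\n{context}\nDo this step: {step}")

    def run(self, task: str) -> dict:
        for i, step in enumerate(self.plan(task)):
            self.memory[f"step_{i}"] = self.run_subagent(step)
        return self.memory

# Usage: DeepAgent().run("Research and summarize three competitors")
```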
Rearchitecting Letta’s Agent Loop: Lessons from ReAct, MemGPT, & Claude Code
A technical post detailing the rearchitecture of Letta's agent loop, transitioning from earlier designs like MemGPT to a V1 design that leverages modern LLM capabilities such as native reasoning.
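For orientation, here is a minimal sketch of the classic ReAct-style loop that posts like this one take as the baseline: the model alternates between reasoning, acting via tools, and observing results until it emits a final answer. `call_llm` and the tool registry are hypothetical stand-ins, not Letta's actual API.

```python
# Illustrative ReAct-style loop: reason -> act (tool call) -> observe,
# repeated until the model produces a final answer or the budget runs out.
def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real LLM call; here the model answers immediately."""
    return {"type": "final", "content": "42"}

TOOLS = {"search": lambda query: f"results for {query!r}"}

def react_loop(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        out = call_llm(messages)
        if out["type"] == "final":          # model is done reasoning/acting
            return out["content"]
        result = TOOLS[out["tool"]](**out["args"])          # act
        messages.append({"role": "tool", "content": result})  # observe
    return "stopped: step budget exhausted"

print(react_loop("What is the answer to everything?"))
```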
Securing your agents with authentication and authorization
An article on securing agents by implementing authentication and authorization (AuthN/AuthZ), addressing their dynamic access needs.
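A minimal sketch of the AuthN/AuthZ gate idea: authenticate the agent's credential once, then authorize every tool call against the scopes it was granted. The names `verify_token`, `TOOL_SCOPES`, and `call_tool` are assumptions made for illustration, not the article's implementation.

```python
# Illustrative tool-call gate: AuthN validates the token, AuthZ checks
# that the token's scopes cover the specific tool being invoked.
TOOL_SCOPES = {"read_calendar": "calendar:read", "send_email": "email:send"}

def verify_token(token: str) -> dict:
    """AuthN placeholder: in practice, validate a JWT/OAuth token and
    return its claims; here we return a fake claims dict for the sketch."""
    return {"sub": "agent-42", "scopes": {"calendar:read"}}

def call_tool(token: str, tool: str, **kwargs):
    claims = verify_token(token)              # authentication
    required = TOOL_SCOPES[tool]
    if required not in claims["scopes"]:      # authorization
        raise PermissionError(f"{claims['sub']} lacks scope {required!r}")
    print(f"running {tool} for {claims['sub']} with {kwargs}")

call_tool("fake-token", "read_calendar", day="2025-10-13")
# call_tool("fake-token", "send_email")  # raises PermissionError: scope missing
```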
Learning
A small number of samples can poison LLMs of any size
An article on data-poisoning attacks showing that as few as 250 malicious documents can backdoor LLMs of any size, challenging the assumption that attackers must control a fixed percentage of the training data.
A strategic blog post analyzing the high costs and risks of upgrading vector embedding models at scale, offering a decision framework that balances cutting-edge performance with stability and operational constraints.
Towards a Typology of Strange LLM Chains-of-Thought
A post outlining six ...