Daily Tech Digest - January 07, 2025

With o3 having reached AGI, OpenAI turns its sights toward superintelligence

One of the challenges of achieving AGI is defining it. As yet, researchers and the broader industry have no concrete description of what it will be or what it will be able to do. The general consensus, though, is that AGI will possess human-level intelligence, be autonomous, have self-understanding, and be able to “reason” and perform tasks it was not trained to do. ... Going beyond AGI, “superintelligence” is generally understood to mean AI systems that far surpass human intelligence. “With superintelligence, we can do anything else,” Altman wrote. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” He added, “this sounds like science fiction right now, and somewhat crazy to even talk about it.” However, “we’re pretty confident that in the next few years, everyone will see what we see,” he said, emphasizing the need to act “with great care” while still maximizing benefit. ... OpenAI set out to build AGI from its founding in 2015, when the concept of AGI, as Altman put it to Bloomberg, was “nonmainstream.” “We wanted to figure out how to build it and make it broadly beneficial,” he wrote in his blog post.


Bridging the execution gap – why AI is the new frontier for corporate strategy

Imagine a future where leadership teams are not constrained by outdated processes but empowered by intelligent systems. In this world, CEOs use AI to visualise their entire organisation’s alignment, ensuring every department contributes to strategic goals. Middle managers leverage real-time insights to adapt plans dynamically, while employees understand how their work drives the company’s mission forward. Such an environment fosters resilience, innovation, and engagement. By turning strategy into a living, breathing entity, organisations can adapt to challenges and seize opportunities faster than ever before. The road to this future is not without challenges. Leaders must embrace cultural change, invest in the right technologies, and commit to continuous learning. But the rewards – a thriving, agile organisation capable of navigating the complexities of the modern business landscape – are well worth the effort. The execution gap has plagued organisations for decades, but the tools to overcome it are now within reach. AI is more than a technological advancement; it is the key to unlocking the full potential of corporate strategy. By embracing adaptability and leveraging AI’s transformative capabilities, businesses can ensure their strategies do not just survive but thrive in the face of change.


Google maps the future of AI agents: Five lessons for businesses

Google argues that AI agents represent a fundamental departure from traditional language models. While models like GPT-4o or Google’s Gemini excel at generating single-turn responses, they are limited to what they’ve learned from their training data. AI agents, by contrast, are designed to interact with external systems, learn from real-time data and execute multi-step tasks. “Knowledge [in traditional models] is limited to what is available in their training data,” the paper notes. “Agents extend this knowledge through the connection with external systems via tools.” This difference is not just theoretical. Imagine a traditional language model tasked with recommending a travel itinerary. ... At the heart of an AI agent’s capabilities is its cognitive architecture, which Google describes as a framework for reasoning, planning and decision-making. This architecture, known as the orchestration layer, allows agents to process information in cycles, incorporating new data to refine their actions and decisions. Google compares this process to a chef preparing a meal in a busy kitchen. The chef gathers ingredients, considers the customer’s preferences and adapts the recipe as needed based on feedback or ingredient availability. Similarly, an AI agent gathers data, reasons about its next steps and adjusts its actions to achieve a specific goal.
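The cycle Google describes maps naturally onto a loop: an orchestration layer asks the model what to do next, calls a tool if one is requested, and feeds the observation back in as context for the next step. Below is a minimal Python sketch of that pattern under stated assumptions; every name in it (call_llm, search_flights, and so on) is a hypothetical placeholder, not Google's or any vendor's actual API.

```python
def call_llm(history: str) -> dict:
    # Stand-in for a real model call; scripted here so the sketch runs
    # end to end. A real orchestration layer would prompt a model and
    # parse its tool request or final answer.
    if "Observation:" not in history:
        return {"tool": "search_flights", "args": "SFO->NRT, next week"}
    return {"answer": "Fly Tuesday; cheapest nonstop found."}

TOOLS = {
    "search_flights": lambda args: f"3 fares found for {args}",  # hypothetical tool
    "check_weather": lambda args: f"forecast for {args}",        # hypothetical tool
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history))
        if "answer" in decision:                 # model is done reasoning
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["args"])
        history.append(f"Observation: {observation}")  # refine the next cycle
    return "No answer within step budget."

print(run_agent("Plan a cheap trip to Tokyo"))
```

The point of the loop is the feedback: each tool result becomes context for the next reasoning pass, which is precisely what a single-turn model lacks.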


AI agents will change work forever. Here's how to embrace that transformation

The business world is full of orthodoxies, beliefs that no one questions because they are thought to be "just the way things are". One such orthodoxy is the phrase "Our people are the difference". A simple Google search can attest to its popularity. Some companies use this orthodoxy as their official or unofficial tagline, a tribute to their employees that they hope sends the right message internally and externally. They hope their employees feel special and that customers take this orthodoxy as proof of their human goodness. Other firms use this orthodoxy as part of their explanation of what makes their company different. It's part of their corporate story. It sounds nice, caring, and positive. The only problem is that this orthodoxy is not true. ... Another way to put this is that individual employees are not fixed assets. They do not behave the same way in all conditions. In most cases, employees are adaptable and can absorb and respond to change. It is the environment, conditions, and potential for relationships that allow this capacity to express itself. So, on the one hand, one company's employees are the same as any other company's employees in the same industry. They move from company to company, read the same magazines, attend similar conventions, and learn the same strategies and processes.


Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting

Identifying potential vulnerabilities is one thing, but writing exploit code that works against them requires a more advanced understanding of security flaws, programming, and the defense mechanisms that exist on the targeted platforms. ... This is one area where LLMs could make a significant impact: bridging the knowledge gap between junior bug hunters and experienced exploit writers. Even generating new variations of existing exploits to bypass detection signatures in firewalls and intrusion prevention systems is a notable development, as many organizations don’t deploy available security patches immediately, instead relying on their security vendors to add detection for known exploits until their patching cycle catches up. ... “AI tools can help less experienced individuals create more sophisticated exploits and obfuscations of their payloads, which aids in bypassing security mechanisms, or providing detailed guidance for exploiting specific vulnerabilities,” Nițescu said. “This, indeed, lowers the entry barrier within the cybersecurity field. At the same time, it can also assist experienced exploit developers by suggesting improvements to existing code, identifying novel attack vectors, or even automating parts of the exploit chain. This could lead to more efficient and effective zero-day exploits.”


GDD: Generative Driven Design

The independent and unidirectional relationship between agentic platform/tool and codebase that defines the Doctor-Patient strategy is also the greatest limiting factor of this strategy, and the severity of this limitation has begun to present itself as a dead end. Two years of agentic tool use in the software development space have surfaced antipatterns that are increasingly recognizable as “bot rot” — indications of poorly applied and problematic generated code. Bot rot stems from agentic tools’ inability to account for, and interact with, the macro architectural design of a project. These tools pepper prompts with lines of context from semantically similar code snippets, which are utterly useless in conveying architecture without a high-level abstraction. Just as a chatbot can manifest a sensible paragraph in a new mystery novel but is unable to thread accurate clues as to “who did it”, isolated code generations pepper the codebase with duplicated business logic and cluttered namespaces. With each generation, bot rot reduces RAG effectiveness and increases the need for human intervention. Because bot rotted code requires a greater cognitive load to modify, developers tend to double down on agentic assistance when working with it, and in turn rapidly accelerate additional bot rotting.
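To see why snippet-level retrieval carries no architectural signal, consider a toy version of the pattern the author criticizes. This is purely illustrative, not the tooling the article examines: chunks are ranked by surface similarity to the prompt and pasted in as context, so nothing tells the model that canonical discount logic already lives in pricing.py.

```python
# Toy sketch of architecture-blind context assembly. The file names and
# snippets are hypothetical examples, not from any real codebase.

def similarity(query: str, chunk: str) -> float:
    # Jaccard overlap on tokens: a crude stand-in for embedding similarity.
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c)

def build_context(query: str, chunks: list[str], k: int = 2) -> str:
    # Rank chunks by surface similarity and keep the top k. Nothing in
    # this score reflects module boundaries or which copy is canonical.
    ranked = sorted(chunks, key=lambda ch: similarity(query, ch), reverse=True)
    return "\n---\n".join(ranked[:k])

chunks = [
    "def apply_discount(order): ...  # pricing.py, the canonical logic",
    "def calc_discount(cart): ...    # checkout.py, an earlier duplicate",
    "class OrderRepository: ...      # persistence layer",
]
print(build_context("add a discount to an order", chunks))
```

Both discount functions score well, neither is marked canonical, and the next generation plausibly adds a third copy: bot rot in miniature.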


Someone needs to make AI easy

Few developers have done a better job of figuring out how to use AI effectively than Simon Willison. In his article “Things we learned about LLMs in 2024,” he simultaneously susses out how much happened in 2024 and why it’s confusing. For example, we’re all told to aggressively use genAI or risk falling behind, but we’re awash in AI-generated “slop” that no one really wants to read. He also points out that LLMs, although marketed as the easy path to AI riches for all who master them, are actually “chainsaws disguised as kitchen knives.” He explains that “they look deceptively simple to use … but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.” If anything, this quagmire got worse in 2024. Incredibly smart people are building incredibly sophisticated systems that leave most developers incredibly frustrated trying to use them effectively. ... Some of this stems from the inability to trust AI to deliver consistent results, but much of it derives from the fact that we keep loading developers up with AI primitives (similar to cloud primitives like storage, networking, and compute) that force them to do the heavy lifting of turning those foundational building blocks into applications.
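To make the "primitives" complaint concrete, here is a hedged sketch of the glue a developer ends up writing just to answer one question over a handful of documents. Every function below is a toy stand-in for a vendor SDK call, not any real API.

```python
# Four primitives hand-wired into one tiny "application": embeddings,
# storage, retrieval, and completion. All of these are hypothetical
# placeholders for real SDK calls.

import math

def embed(text: str) -> list[float]:               # primitive 1: embeddings
    return [float(ord(c) % 7) for c in text[:16]]  # toy stand-in vector

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str]) -> str:  # primitives 2+3: store + search
    qv = embed(question)
    return max(docs, key=lambda d: cosine(qv, embed(d)))

def complete(prompt: str) -> str:                  # primitive 4: completion
    return f"[model answer based on a prompt of {len(prompt)} chars]"

docs = ["refund policy ...", "shipping times ...", "warranty terms ..."]
question = "How do refunds work?"
answer = complete(f"Context: {retrieve(question, docs)}\nQ: {question}")
print(answer)  # the application glue is all on the developer
```

None of this is the application the developer actually wanted to build; it is plumbing, which is the gap the column is pointing at.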


Making the most of cryptography, now and in the future

The mathematicians and cryptographers who have worked on these NIST algorithms expect them to last a long time. Thousands of people have already tried to poke holes in them and haven’t yet made any meaningful progress toward defeating them. So, they are “probably” OK for the time being. But as much as we would like to, we cannot mathematically prove that they will never be broken. This means that commercial enterprises looking to migrate to new cryptography should be braced to change again and again — whether that is in five years, 10 years, or 50 years. ... Until now, most cryptography has been implicit and not under the direct control of management. Putting more controls around cryptography would not only safeguard data today but would also provide the foundation to make the next transition easier. ... Cryptography is full of single points of failure. Even if your algorithm is bulletproof, you might end up with a faulty implementation. Agility helps us move away from these single points of failure, allowing us to adapt quickly if an algorithm is compromised. It is therefore crucial for CISOs to start thinking about agility and redundancy.
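In practice, agility often comes down to indirection: code asks for "the current algorithm" instead of naming one. The sketch below, using Python's standard hashlib purely for illustration, shows the idea for hashing; the same registry pattern extends to signatures and key exchange. It is a minimal sketch of one agility approach, not a prescribed design.

```python
# Route all hashing through a named registry so a compromised primitive
# can be retired by changing configuration rather than rewriting code.

import hashlib

HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,  # fallback from a different algorithm family
}

ACTIVE_HASH = "sha256"  # one config value to flip during a migration

def digest(data: bytes, alg: str | None = None) -> str:
    h = HASHES[alg or ACTIVE_HASH]()
    h.update(data)
    return h.hexdigest()

# Tagging each record with the algorithm name keeps old records
# verifiable after a migration: redundancy against a single point
# of failure. The record layout here is illustrative.
record = {"alg": ACTIVE_HASH, "digest": digest(b"customer-ledger-v1")}
print(record)
```

Keeping a second algorithm from a different mathematical family in the registry is one way to hedge against an entire family being broken at once.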


Data 2025 outlook: AI drives a renaissance of data

Though not all the technology building blocks are in place, many already are. Using AI to crawl and enrich metadata? Automatically generating data pipelines? Using regression analysis to flag data and model drift? Using entity extraction to flag personally identifiable information or to summarize the content of structured or unstructured data? Applying machine learning to automate data quality resolution and data classification? Applying knowledge graphs to RAG? You get the idea. There are a few technology gaps that we expect will be addressed in 2025, including automating the correlation between data and model lineage, assessing the utility and provenance of unstructured data, and simplifying the generation of vector embeddings. We expect that in the coming year, bridging data file and model lineage will become commonplace with AI governance tools and services. And we’ll likely look to emerging approaches such as data observability to transform data quality practices from reactive to proactive. Let’s start with governance. In the data world, this is hardly a new discipline. Though data governance over the years has drawn more lip service than practice, for structured data the underlying technologies for managing data quality, privacy, security and compliance are arguably more established than they are for AI.
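As a concrete, deliberately toy example of the "regression analysis to flag data and model drift" building block mentioned above, the sketch below fits a least-squares trend to a daily accuracy series and flags the model when the slope decays past a threshold. The data and threshold are illustrative, not from the article.

```python
# Minimal drift flag: fit a least-squares line to a daily metric and
# alert when the fitted slope falls below a tolerated decay rate.

def slope(ys: list[float]) -> float:
    # Ordinary least-squares slope with x = 0, 1, ..., n-1.
    n = len(ys)
    x_mean, y_mean = (n - 1) / 2, sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

daily_accuracy = [0.91, 0.90, 0.89, 0.87, 0.86, 0.84, 0.83]  # illustrative data
trend = slope(daily_accuracy)
if trend < -0.005:  # tolerated decay per day, an illustrative threshold
    print(f"drift flagged: accuracy trending {trend:.4f}/day")
```

The same shape of check applies upstream to data drift, by trending feature statistics instead of a model metric.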


Beware the Rise of the Autonomous Cyber Attacker

Research has already shown that teams of AIs working together can find and exploit zero-day vulnerabilities. A team at the University of Illinois Urbana-Champaign created a “task force” of AI agents that worked as a supervised unit and effectively exploited vulnerabilities they had no prior knowledge of. In a recent report, OpenAI also cited three threat actors that used ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command-and-control infrastructure. The company said the activity offered these groups “limited, incremental (new) capabilities” to carry out malicious cyber tasks. ... “Darker” AI use has, in part, prompted many of today’s top thinkers to support regulation. This year, OpenAI CEO Sam Altman said: “I’m not interested in the killer robots walking on the street … things going wrong. I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things go horribly wrong.” ... Theoretically, regulation may reduce unintended or dangerous use among legitimate users, but I’m certain that the criminal economy will appropriate this technology. As CISOs deploy AI more broadly, attackers’ abilities will concurrently soar.



Quote for the day:

"Leadership is a dynamic process that expresses our skill, our aspirations, and our essence as human beings." -- Catherine Robinson-Walker
