
Daily Tech Digest - December 01, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Engineers for the future: championing innovation through people, purpose and progress

Across the industry, Artificial Intelligence (AI) and automation are transforming how we design, build and maintain devices, while sustainability targets are prompting businesses to rethink their operations. The challenge for engineers today is to balance technological advancement with environmental responsibility and people-centered progress. ... The industry faces an ageing workforce, so establishing new pathways into engineering has become increasingly important. Diversity, Equity & Inclusion (DE&I) initiatives play an essential role here, designed to attract more women and under-represented groups into the field. Building teams that reflect a broader mix of backgrounds and perspectives does more than close the skills gap: it drives creativity and strengthens the innovation needed to meet future challenges in areas such as AI and sustainability. Engineering has always been about solving problems, but today’s challenges, from digital transformation to decarbonization, demand an ‘innovation mindset’ that looks ahead and designs for lasting impact. ... The future of engineering will not be defined by one technological breakthrough. It will be shaped by lots of small, deliberate improvements – smarter maintenance, data-driven decisions, lower emissions, recyclability – that make systems more efficient and resilient. Progress will come from engineers who continue to refine how things work, linking technology, sustainability and human insight. 


Why data readiness defines GenAI success

Enterprises are at varying stages of maturity. Many do not yet have the strong data foundation required to support scaling AI, especially GenAI. Our Intelligent Data Management Cloud (IDMC) addresses this gap by enabling enterprises to prepare, activate, manage, and secure their data. It ensures that data is intelligent, contextual, trusted, compliant, and secure. Interestingly, organisations in regulated industries tend to be more prepared because they have historically invested heavily in data hygiene. But overall, readiness is a journey, and we support enterprises across all stages. ... The rapid adoption of agents and AI models has dramatically increased governance complexity. Many enterprises already manage tens of thousands of data tasks. In the AI era, this scales to tens of thousands of agents as well. The solution lies in a unified metadata-driven foundation. An enterprise catalog that understands entities, relationships, policies, and lineage becomes the single source of truth. This catalog does not require enterprises to consolidate immediately; it can operate across heterogeneous catalogs, but the more an enterprise consolidates, the more complexity shifts from people and processes into the catalog itself. Auto-cataloging is critical. Automatically detecting relationships, lineage, governance rules, compliance requirements, and quality constraints reduces manual overhead and ensures consistency. 


12 signs the CISO-CIO relationship is broken — and steps to fix it

“It’s critical that those in these two positions get along with each other, and that they’re not only collegial but collaborative,” he says. Yes, they each have their own domain and their own set of tasks and objectives, but the reality is that each one cannot get that work done without the other. “So they have to rely on one another, and they have to each recognize that they must rely on each other.” Moreover, it’s not just the CIO and CISO who suffer when they aren’t collegial and collaborative. Palmore and other experts say a poor CIO-CISO relationship also has a negative impact on their departments and the organization as a whole. “A strained CIO-CISO relationship often shows up as misalignment in goals, priorities, or even communication,” says Marnie Wilking, CSO at Booking.com. ... CIOs and CISOs both have incentives to improve a problematic relationship. As Lee explains, “The CIO-CISO relationship is critical. They both have to partner effectively to achieve the organization’s technology and cybersecurity goals. All tech comes with cybersecurity exposure that can impact the successful implementation of the tech and business outcomes; that’s why CIOs have to care about cybersecurity. And CISOs have to know that cybersecurity exists to achieve business outcomes. So they have to work together to achieve each other’s priorities.” CISOs can take steps to develop a better rapport with their CIOs, using the disruption happening today


Meeting AI-driven demand with flexible and scalable data centers

Analysts predict that by 2030, 80 percent of the AI workloads will be for inference rather than training, which led Aitkenhead to say that the size of the inference capacity expansion is “just phenomenal”. Additionally, neo cloud companies such as CoreWeave and G‑Core are now buying up large volumes of hyperscale‑grade capacity to serve AI workloads. To keep up with this changing landscape, IMDC is ensuring that it has access to large amounts of carbon-free power and that it has the flexible cooling infrastructure that can adapt to customers’ requirements as they change over time. ... The company is adopting a standard data center design that can accommodate both air‑based and water‑based cooling, giving customers the freedom to choose any mix of the two. The design is deliberately oversized (Aitkenhead said it can provide well over 100 percent of the cooling capacity initially needed) so it can handle rising rack densities. ... This expansion is financed entirely from Iron Mountain’s strong, cash‑generating businesses, which gives the data center arm the capital to invest aggressively while improving cost predictability and operational agility. With a revamped design construction process and a solid expansion strategy, IMDC is positioning itself to capture the surging demand for AI‑driven, high‑density workloads, ensuring it can meet the market’s steep upward curve and remain “exciting” and competitive in the years ahead.


AI Agents Lead The 8 Tech Trends Transforming Enterprise In 2026

Step aside chatbots; agents are the next stage in the evolution of enterprise AI, and 2026 will be their breakout year. ... Think of virtual co-workers, always-on assistants monitoring and adjusting processes in real-time, and end-to-end automated workflows requiring minimal human intervention. ... GenAI is moving rapidly from enterprise pilots to operational adoption, transforming knowledge workflows; generating code for software engineers, drafting contracts for legal teams, and creating schedules and action plans for project managers. ... Enterprise organizations are outgrowing generic cloud platforms and increasingly looking to adopt Industry Cloud Platforms (ICP), offering vertical solutions encompassing infrastructure, applications and data. ... This enterprise trend is driven by both the proliferation of smart, connected IoT devices and the behavioral shift to remote and hybrid working. The zero-trust edge (ZTE) concept refers to security functionality built into edge devices, from industrial machinery to smartphones, via cloud platforms, to ensure consistent administration of security functionality. ... Enterprises are responding by adopting green software engineering principles for carbon efficiency and adopting AI to monitor their activities. In 2026, the strategy is “green by design”, reflecting the integration of sustainability into enterprise DNA.


Preparing for the Quantum Future: Lessons from Singapore

While PQC holds promise, it faces challenges such as larger key sizes, the need for side-channel-resistant implementations, and limited adoption in standard protocols like Transport Layer Security (TLS) and Secure Shell (SSH). ... In contrast to PQC, QKD takes a different approach: instead of relying on mathematics, it uses the laws of quantum physics to generate and exchange encryption keys securely. If an attacker tries to intercept the key exchange, the quantum state changes, revealing the intrusion. The strength of this approach is that its security rests on physical law rather than computational hardness, so it cannot be broken by advances in algorithms or computing power. QKD is particularly useful for strategic sites or large facilities with significant volumes of data transfers. ... Nation-scale strategies for quantum-safe networks are vital to prepare for Q-Day and ensure protection against quantum threats. To this end, Singapore has started a program called the National Quantum Safe Network (NQSN) to build a nationwide testbed and platform for quantum-safe technologies using a real-life fibre network. ... In a step towards securing future quantum threats, ST Engineering is also developing a Quantum-Safe Satellite Network for cross-border applications, supported by mobile and fixed Quantum Optical Ground Stations (Q-OGS). Space QKD will complement terrestrial QKD to form a global quantum-safe network. The last mile, which is typically copper cable, will rely on PQC for protection.


Superintelligence: Should we stop a race if we don’t actually know where the finish line is?

The term ‘superintelligence’ encapsulates the concerns raised. It refers to an AI system whose capabilities would surpass those of humans in almost every field: logical reasoning, creativity, strategic planning and even moral judgement. However, in reality, the situation is less clear-cut: no one actually knows what such an entity would be like, or how to measure it. Would it be an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system that performs even more efficiently than our current models? ... How can a pause be enforced globally when the world’s major powers have such divergent economic and geopolitical interests? The United States, China and the European Union are in fierce competition to dominate the strategic sector of artificial intelligence; slowing down unilaterally would risk losing a decisive advantage. However, for the signatories, the absence of international coordination is precisely what makes this pause essential.  ... Researchers themselves recognise the irony of the situation: they are concerned about a phenomenon that they cannot yet describe. Superintelligence is currently a theoretical concept, a kind of projection of our anxieties and ambitions. But it is precisely this uncertainty that warrants caution. If we do not know the exact nature of the finish line, should we really keep on racing forward without knowing what we are heading for?


Treating MCP like an API creates security blind spots

APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does, which means you need a completely different security model. LLMs treat text as instructions: they follow whatever you feed them, and MCP servers inject text directly into that execution context. ... Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers, which is why the MCP specification is being updated. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough. Another misunderstood trust model arises when MCP users confuse vendor reputation with architectural trustworthiness. ... Lastly, and most importantly, MCP is a protocol (not a product), and protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. ... Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry. The attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences.
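The tool-mimicry risk above can be screened for mechanically before tools are exposed to a model. Below is a minimal Python sketch of that idea; the server names, tool names, and similarity threshold are hypothetical illustrations, not anything defined by the MCP specification:

```python
# Sketch: flag confusably similar tool names across different MCP servers,
# since near-duplicates are the raw material for Tool Impersonation attacks.
from difflib import SequenceMatcher
from itertools import combinations

def find_suspicious_pairs(tools, threshold=0.85):
    """Return cross-server (server, tool_name) pairs whose names look alike."""
    pairs = []
    for (srv_a, name_a), (srv_b, name_b) in combinations(tools, 2):
        if srv_a == srv_b:
            continue  # same-server collisions are a config error, not mimicry
        ratio = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
        if ratio >= threshold:
            pairs.append(((srv_a, name_a), (srv_b, name_b), round(ratio, 2)))
    return pairs

registered = [
    ("files-server", "read_file"),
    ("untrusted-server", "read_flle"),  # look-alike name, one glyph off
    ("search-server", "web_search"),
]
print(find_suspicious_pairs(registered))  # flags the read_file/read_flle pair
```

A check like this is only one layer; it catches look-alike names but not a tool whose description, rather than its name, coaxes the model into misuse.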


Ontology is the real guardrail: How to stop AI agents from misunderstanding your business

Building effective agentic solutions requires an ontology-based single source of truth. An ontology is a business definition of concepts, their hierarchy, and their relationships. It defines terms with respect to business domains, helps establish a single source of truth for data, captures uniform field names, and applies classifications to fields. An ontology may be domain-specific (healthcare or finance), or organization-specific based on internal structures. Defining an ontology upfront is time-consuming, but it can help standardize business processes and lay a strong foundation for agentic AI. ... Agents designed in this manner and tuned to follow an ontology can stick to guardrails and avoid hallucinations that can be caused by the large language models (LLMs) powering them. For example, a business policy may define that unless all documents associated with a loan have their verified flags set to "true," the loan status should be kept in a “pending” state. Agents can work from this policy to determine what documents are needed and query the knowledge base. ... With this method, we can avoid hallucinations by requiring agents to follow ontology-driven paths and maintain data classifications and relationships. Moreover, we can scale easily by adding new assets, relationships and policies that agents can automatically comply with, and control hallucinations by defining rules for the whole system rather than individual entities.
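As a toy illustration of the loan policy above, an ontology-derived rule can be enforced as code rather than left to the LLM's judgement. This is a minimal sketch under stated assumptions: the field names and the "approved" successor state are invented for illustration, not a standard schema:

```python
# Minimal sketch of an ontology-driven guardrail: the loan policy from the
# text, encoded as a deterministic rule the agent must consult before it is
# allowed to change loan status.
def loan_status(documents):
    """Keep the loan 'pending' until every associated document is verified."""
    if all(doc.get("verified") is True for doc in documents):
        return "approved"  # illustrative next state; the ontology would define it
    return "pending"

docs = [
    {"name": "income_proof", "verified": True},
    {"name": "id_scan", "verified": False},
]
print(loan_status(docs))  # → pending
```

The point of the sketch is that the policy lives outside the model: the agent can reason freely about which documents to fetch, but the state transition itself is gated by code derived from the ontology.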


The end of apps? Imagining software’s agentic future

Enterprise software vendors are scrambling to embed agents into existing applications. Oracle Corp. claims to have more than 600 embedded AI agents in its Fusion Cloud and Industry Applications. SAP says it has more than 40.  ... This shift is not simply about embedding AI into existing products, as generative AI is supplanting conventional menus and dashboards. It’s a rethinking of software’s core functions. Many experts working on the agentic future say the way software is built, packaged and used is about to change profoundly. Instead of being a set of buttons and screens, software will become a collaborator that interprets goals, orchestrates processes, adapts in real time and anticipates what users need based on their behavior and implied preferences. ... The coming changes to enterprise software will go beyond the interface. AI will force monolithic software stacks to give way to modular, composable systems stitched together by agents using standards such as the Model Context Protocol, the Agent2Agent Protocol and the Agent Communication Protocol that IBM Corp. recently donated to the Linux Foundation. “By 2028, AI agent ecosystems will enable networks of specialized agents to dynamically collaborate across multiple applications, allowing users to achieve goals without interacting with each application individually,” Gartner recently predicted.

Daily Tech Digest - October 25, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


The day the cloud went dark

This week, the impossible happened—again. Amazon Web Services, the backbone of the digital economy and the world’s largest cloud provider, suffered a large-scale outage. If you work in IT or depend on cloud services, you didn’t need a news alert to know something was wrong. Productivity ground to a halt, websites failed to load, business systems stalled, and the hum of global commerce was silenced, if only for a few hours. The impact was immediate and severe, affecting everything from e-commerce giants to startups, including my own consulting business. ... Some businesses hoped for immediate remedies from AWS’s legendary service-level agreements. Here’s the reality: SLA credits are cold comfort when your revenue pipeline is in freefall. The truth that every CIO has faced at least once is that even industry-leading SLAs rarely compensate for the true cost of downtime. They don’t make up for lost opportunities, damaged reputations, or the stress on your teams. ... This outage is a wake-up call. Headlines will fade, and AWS (and its competitors) will keep promising ever-improving reliability. Just don’t forget the lesson: No matter how many “nines” your provider promises, true business resilience starts inside your own walls. Enterprises must take matters into their own hands to avoid existential risk the next time lightning strikes.


Application Modernization Pitfalls: Don't Let Your Transformation Fail

Modernizing legacy applications is no longer a luxury — it’s a strategic imperative. Whether driven by cloud adoption, agility goals, or technical debt, organizations are investing heavily in transformation. Yet, for all its potential, many modernization projects stall, exceed budgets, or fail to deliver the expected business value. Why? The transition from a monolithic legacy system to a flexible, cloud-native architecture is a complex undertaking that involves far more than just technology. It's a strategic, organizational, and cultural shift. And that’s where the pitfalls lie. ... Application modernization is not just a technical endeavor — it’s a strategic transformation that touches every layer of the organization. From legacy code to customer experience, from cloud architecture to compliance posture, the ripple effects are profound. Yet, the most overlooked ingredient in successful modernization isn’t technology — it’s leadership: Leadership that frames modernization as a business enabler, not a cost center; Leadership that navigates complexity with clarity, acknowledging legacy constraints while championing innovation; Leadership that communicates with empathy, recognizing that change is hard and adoption is earned, not assumed. Modernization efforts fail not because teams lack skill, but because they lack alignment. 


CIOs will be on the hook for business-led AI failures

While some business-led AI projects include CIO input, AI experts have seen many organizations launch AI projects without significant CIO or IT team support. When other departments launch AI projects without heavy IT involvement, they may underestimate the technical work needed to make the projects successful, says Alek Liskov, chief AI officer at data refinery platform provider Datalinx AI. ... “Start with the tech folks in the room first, before you get much farther,” he says. “I still see many organizations where there’s either a disconnect between business and IT, or there’s lack of speed on the IT side, or perhaps it’s just a lack of trust.” Despite the doubts, IT leaders need to be involved from the beginning of all AI projects, adds Bill Finner, CIO at large law firm Jackson Walker. “AI is just another technology to add to the stack,” he says. “Better to embrace it and help the business succeed than to sit back and watch from the bench.” ... “It’s a great opportunity for CIOs to work closely with all the practice areas both on the legal and business professional side to ensure we’re educating everyone on the capabilities of the applications and how they can enhance their day-to-day workflows by streamlining processes,” Finner says. “CIOs love to help the business succeed, and this is just another area where they can show their value.”


Three Questions That Help You Build a Better Software Architecture

You don’t want to create an architecture for a product that no one needs. And in validating the business ideas, you will test assumptions that drive quality attributes like scalability and performance needs. To do this, the MVP has to be more than a Proof of Concept - it needs to be able to scale well enough and perform well enough to validate the business case, but it does not need to answer all questions about scalability and performance ... yet. ... Achieving good performance while scaling can also mean reworking parts of the solution that you’ve already built; solutions that perform well with a few users may break down as load is increased. On the other hand, you may never need to scale to the loads that cause those failures, so overinvesting too early can simply be wasted effort. Many scaling issues also stem from a critical bottleneck, usually related to accessing a shared resource. Spotting these early can inform the team about when, and under what conditions, they might need to change their approach. ... One of the most important architectural decisions that teams must make is to decide how they will know that technical debt has risen too far for the system to be supportable and maintainable in the future. The first thing they need to know is how much technical debt they are actually incurring. One way they can do this is by recording decisions that incur technical debt in their Architectural Decision Record (ADR).
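One lightweight way to act on that ADR suggestion is to make the debt machine-readable in the decision records themselves, so the team can tell when it has crossed its agreed threshold. The sketch below assumes a team convention: the `incurs_debt` and `debt_cost_days` fields and the budget figure are invented for illustration and are not part of any ADR standard:

```python
# Sketch: tallying technical debt recorded in Architectural Decision Records,
# so "too much debt" becomes a measurable threshold rather than a feeling.
adrs = [
    {"id": 7, "title": "Defer cache invalidation redesign",
     "incurs_debt": True, "debt_cost_days": 12},
    {"id": 8, "title": "Adopt managed queue",
     "incurs_debt": False},
    {"id": 9, "title": "Skip schema migration tooling",
     "incurs_debt": True, "debt_cost_days": 20},
]

BUDGET_DAYS = 25  # team-agreed ceiling before debt must be paid down

total = sum(a.get("debt_cost_days", 0) for a in adrs if a["incurs_debt"])
status = "over budget" if total > BUDGET_DAYS else "within budget"
print(f"{total} days of recorded debt; {status}")  # → 32 days ... over budget
```

The estimates will always be rough, but recording them at decision time is what makes the later "has it risen too far?" question answerable at all.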


Ransomware recovery perils: 40% of paying victims still lose their data

Decryptors are frequently slow and unreliable, John adds. “Large-scale decryption across enterprise environments can take weeks and often fails on corrupted files or complex database systems,” he explains. “Cases exist where the decryption process itself causes additional data corruption.” Even when decryptor tools are supplied, they may contain bugs, or leave files corrupted or inaccessible. Many organizations also rely on untested — and vulnerable — backups. Making matters still worse, many ransomware victims discover that their backups were also encrypted as part of the attack. “Criminals often use flawed or incompatible encryption tools, and many businesses lack the infrastructure to restore data cleanly, especially if backups are patchy or systems are still compromised,” says Daryl Flack, partner at UK-based managed security provider Avella Security and cybersecurity advisor to the UK Government. ... “Setting aside funds to pay a ransom is increasingly viewed as problematic,” Tsang says. “While payment isn’t illegal in itself, it may breach sanctions, it can fuel further criminal activity, and there is no guarantee of a positive outcome.” A more secure legal and strategic position comes from investing in resilience through strong security measures, well-tested recovery plans, clear reporting protocols, and cyber insurance, Tsang advises.


In IoT Security, AI Can Make or Break

Ironically, the same techniques that help defenders also help attackers. Criminals are automating reconnaissance, targeting exposed protocols common in IoT, and accelerating exploitation cycles. Fortinet recently highlighted a surge in AI-driven automated scanning (tens of thousands of scans per second), where IoT and Session Initiation Protocol (SIP) endpoints are probed earlier in the kill chain. That scale turns "long-tail" misconfigurations into early footholds. Worse, AI itself is susceptible to attack. Adversarial ML (machine learning) can blind or mislead detection models, while prompt injection and data poisoning can repurpose AI assistants connected to physical systems. ... Move response left. Anomaly detection without orchestration just creates work. It's important to pre-stage responses such as quarantine VLANs, Access Control List (ACL) updates, Network Access Control (NAC) policies, and maintenance window tickets. This way, high-confidence detections contain first and ask questions second. Finally, run purple-team exercises that assume AI is the target and the tool. This includes simulating prompt injection against your assistants and dashboards; simulating adversarial noise against your IoT Intrusion Detection System (IDS); and testing whether analysts can distinguish "model weirdness" from real incidents under time pressure.
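The "contain first, ask questions second" idea above can be expressed as a pre-staged playbook keyed on detection confidence, so orchestration is decided before the alert fires. A minimal sketch, with thresholds and action names chosen for illustration (they echo the quarantine-VLAN, ACL, and ticketing responses mentioned in the text but are not from any product):

```python
# Sketch of "moving response left": pre-staged containment actions are matched
# to detection confidence, highest threshold first, so high-confidence hits
# contain immediately and low-confidence ones only generate work items.
PLAYBOOK = [
    (0.95, "move_to_quarantine_vlan"),
    (0.80, "apply_restrictive_acl"),
    (0.50, "open_maintenance_ticket"),
]

def respond(confidence):
    for threshold, action in PLAYBOOK:
        if confidence >= threshold:
            return action
    return "log_only"  # below every threshold: record it, take no action

print(respond(0.97))  # → move_to_quarantine_vlan
print(respond(0.60))  # → open_maintenance_ticket
```

In a real deployment each action name would map to a NAC or firewall API call; the value of the pattern is that the mapping is reviewed and rehearsed in advance rather than improvised mid-incident.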


Cyber attack on Jaguar Land Rover estimated to cost UK economy £1.9 billion

Most of the estimated losses stem from halted vehicle production and reduced manufacturing output. JLR’s production reportedly dropped by around 5,000 vehicles per week during the shutdown, translating to weekly losses of approximately £108 million. The shock has cascaded across hundreds of suppliers and service providers. Many firms have faced cash-flow pressures, with some taking out emergency loans. To mitigate the fallout, JLR has reportedly cleared overdue invoices and issued advance payments to critical suppliers. ... The CMC’s Technical Committee urged businesses and policymakers to prioritise resilience against operational disruptions, which now pose the greatest financial risk from cyberattacks. The committee recommended identifying critical digital assets, strengthening segmentation between IT and operational systems, and ensuring robust recovery plans. It also called on manufacturers to review supply-chain dependencies and maintain liquidity buffers to withstand prolonged shutdowns. Additionally, it advised insurers to expand cyber coverage to include large-scale supply chain disruption, and urged the government to clarify criteria for financial support in future systemic cyber incidents.


Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'

To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants. "If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing." The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. ... "Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again." That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. ... The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."
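Rafailov's proposed objective shift can be caricatured in a few lines: the conventional reward pays out only on success, while a progress-based reward credits any improvement between attempts. The functions and numbers below are toy illustrations, not from any actual training setup:

```python
# Toy contrast between the two reward schemes described in the text.
def reward_success(solved):
    """Conventional scheme: binary payout for solving the problem."""
    return 1.0 if solved else 0.0

def reward_progress(score_before, score_after):
    """Proposed scheme: credit improvement, even short of a full solution."""
    return round(score_after - score_before, 2)

# A model that got closer but didn't finish earns nothing under the first
# scheme, but positive credit under the second.
print(reward_success(False))        # → 0.0
print(reward_progress(0.40, 0.65))  # → 0.25
```

The hard part, which the sketch deliberately omits, is defining a trustworthy progress score that the model cannot game.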


Demystifying Data Observability: 5 Steps to AI-Ready Data

Data observability ensures data pipelines capture representative data, both the expected and the messy. By continuously measuring drift, outliers, and unexpected changes, observability creates the feedback loop that allows AI/ML models to learn responsibly. In short, observability is not an add-on; it is a foundational practice for AI-ready data. ... Rather than relying on manual checks after the fact, observability should be continuous and automated. This turns observability from a reactive safety net into a proactive accelerator for trusted data delivery. As a result, every new dataset or transformation can generate metadata about quality, lineage, and performance, while pipelines can include regression tests and alerting as standard practice. ... The key is automation. Rather than policies that sit in binders, observability enables policies as code. In this way, data contracts and schema checks that are embedded in pipelines can validate that inputs remain fit for purpose. Drift detection routines, too, can automatically flag when training data diverges from operational realities while governance rules, from PII handling to lineage, are continuously enforced, not applied retroactively. ... It’s tempting to measure observability in purely technical terms such as the number of alerts generated, data quality scores, or percentage of tables monitored. But the real measure of success is its business impact. Rather than numbers, organizations should ask if it resulted in fewer failed AI deployments. 
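A "policies as code" check of the kind described can be as small as a schema contract plus a drift test embedded in a pipeline step. A minimal sketch, with the contract fields and the drift tolerance as illustrative assumptions:

```python
# Sketch of policies as code: a data contract validated inline, plus a simple
# mean-shift drift flag comparing live data against the training baseline.
from statistics import mean

CONTRACT = {"user_id": int, "amount": float}  # illustrative schema

def check_contract(rows):
    """Fail fast if any row violates the declared schema."""
    for row in rows:
        for field, ftype in CONTRACT.items():
            if not isinstance(row.get(field), ftype):
                raise ValueError(f"contract violation: {field!r} in {row}")

def drift_flag(train_values, live_values, tolerance=0.25):
    """Flag when the live mean diverges from the training mean by > tolerance."""
    baseline = mean(train_values)
    return abs(mean(live_values) - baseline) / abs(baseline) > tolerance

rows = [{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": 14.50}]
check_contract(rows)  # passes silently
print(drift_flag([10, 12, 11], [19, 21, 20]))  # → True: live mean has shifted
```

Production systems would use a cataloged contract and a distributional test rather than a mean comparison, but the shape is the same: the policy runs inside the pipeline, not in a binder.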


AI heavyweights call for end to ‘superintelligence’ research

Superintelligence isn’t just hype. It’s a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world’s best researchers. ... Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology. Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that’s producing greenhouse gases. ... For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.