Quote for the day:
"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart
Engineers for the future: championing innovation through people, purpose and progress
Across the industry, Artificial Intelligence (AI) and automation are
transforming how we design, build and maintain devices, while sustainability
targets are prompting businesses to rethink their operations. The challenge for
engineers today is to balance technological advancement with environmental
responsibility and people-centered progress. ... The industry faces an ageing
workforce, so establishing new pathways into engineering has become increasingly
important. Diversity, Equity & Inclusion (DE&I) initiatives play an
essential role here, designed to attract more women and under-represented groups
into the field. Building teams that reflect a broader mix of backgrounds and
perspectives does more than close the skills gap: it drives creativity and
strengthens the innovation needed to meet future challenges in areas such as AI
and sustainability. Engineering has always been about solving problems, but
today’s challenges, from digital transformation to decarbonization, demand an
‘innovation mindset’ that looks ahead and designs for lasting impact. ... The
future of engineering will not be defined by one technological breakthrough. It
will be shaped by lots of small, deliberate improvements – smarter maintenance,
data-driven decisions, lower emissions, recyclability – that make systems more
efficient and resilient. Progress will come from engineers who continue to
refine how things work, linking technology, sustainability and human
insight.
Why data readiness defines GenAI success
Enterprises are at varying stages of maturity. Many do not yet have the strong data foundation required to support scaling AI, especially GenAI. Our Intelligent Data Management Cloud (IDMC) addresses this gap by enabling enterprises to prepare, activate, manage, and secure their data. It ensures that data is intelligent, contextual, trusted, compliant, and secure. Interestingly, organisations in regulated industries tend to be more prepared because they have historically invested heavily in data hygiene. But overall, readiness is a journey, and we support enterprises across all stages. ... The rapid adoption of agents and AI models has dramatically increased governance complexity. Many enterprises already manage tens of thousands of data tasks. In the AI era, this scales to tens of thousands of agents as well. The solution lies in a unified metadata-driven foundation. An enterprise catalog that understands entities, relationships, policies, and lineage becomes the single source of truth. This catalog does not require enterprises to consolidate immediately; it can operate across heterogeneous catalogs, but the more an enterprise consolidates, the more complexity shifts from people and processes into the catalog itself. Auto-cataloging is critical. Automatically detecting relationships, lineage, governance rules, compliance requirements, and quality constraints reduces manual overhead and ensures consistency.
12 signs the CISO-CIO relationship is broken — and steps to fix it
“It’s critical that those in these two positions get along with each other, and
that they’re not only collegial but collaborative,” he says. Yes, they each have
their own domain and their own set of tasks and objectives, but the reality is
that each one cannot get that work done without the other. “So they have to rely
on one another, and they have to each recognize that they must rely on each
other.” Moreover, it’s not just the CIO and CISO who suffer when they aren’t
collegial and collaborative. Palmore and other experts say a poor CIO-CISO
relationship also has a negative impact on their departments and the
organization as a whole. “A strained CIO-CISO relationship often shows up as
misalignment in goals, priorities, or even communication,” says Marnie Wilking,
CSO at Booking.com. ... CIOs and CISOs both have incentives to improve a
problematic relationship. As Lee explains, “The CIO-CISO relationship is
critical. They both have to partner effectively to achieve the organization’s
technology and cybersecurity goals. All tech comes with cybersecurity exposure
that can impact the successful implementation of the tech and business outcomes;
that’s why CIOs have to care about cybersecurity. And CISOs have to know that
cybersecurity exists to achieve business outcomes. So they have to work together
to achieve each other’s priorities.” CISOs can take steps to develop a better
rapport with their CIOs, using the disruption happening today.
Meeting AI-driven demand with flexible and scalable data centers
Analysts predict that by 2030, 80 percent of AI workloads will be for inference rather than training, which led Aitkenhead to say that the size of the inference capacity expansion is “just phenomenal”. Additionally, neocloud companies such as CoreWeave and G‑Core are now buying up large volumes of hyperscale‑grade capacity to serve AI workloads. To keep up with this changing landscape, IMDC is ensuring that it has access to large amounts of carbon-free power and that it has flexible cooling infrastructure that can adapt to customers’ requirements as they change over time. ... The company is adopting a standard data center design that can accommodate both air‑based and water‑based cooling, giving customers the freedom to choose any mix of the two. The design is deliberately oversized (Aitkenhead said it can provide well over 100 percent of the cooling capacity initially needed) so it can handle rising rack densities. ... This expansion is financed entirely from Iron Mountain’s strong, cash‑generating businesses, which gives the data center arm the capital to invest aggressively while improving cost predictability and operational agility. With a revamped design and construction process and a solid expansion strategy, IMDC is positioning itself to capture the surging demand for AI‑driven, high‑density workloads, ensuring it can meet the market’s steep upward curve and remain “exciting” and competitive in the years ahead.
AI Agents Lead The 8 Tech Trends Transforming Enterprise In 2026
Step aside, chatbots; agents are the next stage in the evolution of enterprise
AI, and 2026 will be their breakout year. ... Think of virtual co-workers,
always-on assistants monitoring and adjusting processes in real-time, and
end-to-end automated workflows requiring minimal human intervention. ... GenAI
is moving rapidly from enterprise pilots to operational adoption, transforming
knowledge workflows: generating code for software engineers, drafting
contracts for legal teams, and creating schedules and action plans for project
managers. ... Enterprise organizations are outgrowing generic cloud platforms
and increasingly looking to adopt Industry Cloud Platforms (ICP), offering
vertical solutions encompassing infrastructure, applications and data. ...
This enterprise trend is driven by both the proliferation of smart, connected
IoT devices and the behavioral shift to remote and hybrid working. The
zero-trust edge (ZTE) concept refers to security functionality built into edge
devices, from industrial machinery to smartphones, via cloud platforms, to
ensure consistent administration of security functionality. ... Enterprises
are responding by adopting green software engineering principles for carbon
efficiency and adopting AI to monitor their activities. In 2026, the strategy
is “green by design”, reflecting the integration of sustainability into
enterprise DNA.
Preparing for the Quantum Future: Lessons from Singapore
While PQC holds promise, it faces challenges such as larger key sizes, the
need for side-channel-resistant implementations, and limited adoption in
standard protocols like Transport Layer Security (TLS) and Secure Shell (SSH).
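The key-size challenge can be made concrete. A small sketch comparing typical on-the-wire elliptic-curve key shares with the ML-KEM (FIPS 203) encapsulation-key sizes — figures are from the published parameter sets, and the comparison is purely illustrative:

```python
# Why PQC strains protocols like TLS and SSH: post-quantum public-key
# material is far larger than today's elliptic-curve key shares.
classical = {
    "X25519": 32,        # Curve25519 public key, bytes
    "P-256 ECDH": 65,    # uncompressed EC point, bytes
}
post_quantum = {
    # ML-KEM (FIPS 203) encapsulation-key sizes, bytes
    "ML-KEM-512": 800,
    "ML-KEM-768": 1184,
    "ML-KEM-1024": 1568,
}

def overhead_factor(pq_bytes: int, classical_bytes: int) -> float:
    """How many times larger the post-quantum key share is."""
    return pq_bytes / classical_bytes

for name, size in post_quantum.items():
    factor = overhead_factor(size, classical["X25519"])
    print(f"{name}: {size} bytes ({factor:.0f}x an X25519 key share)")
```

A 37x larger key share is why hybrid deployments in TLS and SSH pay real handshake-size and fragmentation costs, and why side-channel-hardened implementations matter before broad rollout.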
... In contrast to PQC, QKD takes a different approach: instead of relying on
mathematics, it uses the laws of quantum physics to generate and exchange
encryption keys securely. If an attacker tries to intercept the key exchange,
the quantum state changes, revealing the intrusion. The strength of this
approach is that its security rests on the laws of physics rather than on
mathematical hardness assumptions, so there is no underlying algorithm for an
attacker to break. QKD is particularly useful for strategic sites or large
locations with substantial volumes of data transfers. ... Nation-scale
strategies for quantum-safe
networks are vital to prepare for Q-Day and ensure protection against quantum
threats. To this end, Singapore has started a program called the National
Quantum Safe Network (NQSN) to build a nationwide testbed and platform for
quantum-safe technologies using a real-life fibre network. ... In a step
towards securing future quantum threats, ST Engineering is also developing a
Quantum-Safe Satellite Network for cross-border applications, supported by
mobile and fixed Quantum Optical Ground Stations (Q-OGS). Space QKD will
complement terrestrial QKD to form a global quantum-safe network. The last
mile, which is typically copper cable, will rely on PQC for protection.
Superintelligence: Should we stop a race if we don’t actually know where the finish line is?
The term ‘superintelligence’ encapsulates the concerns raised. It refers to an
AI system whose capabilities would surpass those of humans in almost every
field: logical reasoning, creativity, strategic planning and even moral
judgement. However, in reality, the situation is less clear-cut: no one actually
knows what such an entity would be like, or how to measure it. Would it be an
intelligence capable of self-improvement without supervision? An emerging
consciousness? Or simply a system that performs even more efficiently than our
current models? ... How can a pause be enforced globally when the world’s major
powers have such divergent economic and geopolitical interests? The United
States, China and the European Union are in fierce competition to dominate the
strategic sector of artificial intelligence; slowing down unilaterally would
risk losing a decisive advantage. However, for the signatories, the absence of
international coordination is precisely what makes this pause essential.
... Researchers themselves recognise the irony of the situation: they are
concerned about a phenomenon that they cannot yet describe. Superintelligence is
currently a theoretical concept, a kind of projection of our anxieties and
ambitions. But it is precisely this uncertainty that warrants caution. If we do
not know the exact nature of the finish line, should we really keep on racing
forward without knowing what we are heading for?
Treating MCP like an API creates security blind spots
APIs generally don’t cause arbitrary, untrusted code to run in sensitive environments. MCP does, though, which means you need a completely different security model. LLMs treat text as instructions: they follow whatever you feed them. MCP servers inject text into that execution context. ... Security professionals might also erroneously assume that they can trust all clients registering with their MCP servers; this is why the MCP spec is being updated. MCP builders will have to update their code to receive the additional client identification metadata, as dynamic client registration and OAuth alone are not always enough. Another misunderstood trust model is when MCP users confuse vendor reputation with architectural trustworthiness. ... Lastly, and most importantly, MCP is a protocol (not a product), and protocols don’t offer a built-in “trust guarantee.” Ultimately, the protocol only describes how servers and clients communicate through a unified language. ... Risks can also emerge from the names of tools within MCP servers. If tool names are too similar, the AI model can become confused and select the wrong tool. Malicious actors can exploit this in an attack vector known as Tool Impersonation or Tool Mimicry: the attacker simply adds a tool within their malicious server that tricks the AI into using it instead of a similarly named legitimate tool in another server you use. This can lead to data exfiltration, credential theft, data corruption, and other costly consequences.
Ontology is the real guardrail: How to stop AI agents from misunderstanding your business
Building effective agentic solutions requires an ontology-based single source of
truth. An ontology is a business definition of concepts, their hierarchy and
relationships. It defines terms with respect to business domains, helps
establish a single source of truth for data, captures uniform field names, and
applies classifications to fields. An ontology may be domain-specific
(healthcare or finance) or organization-specific, based on internal structures.
Defining an ontology upfront is time consuming, but can help standardize
business processes and lay a strong foundation for agentic AI. ... Agents
designed in this manner and tuned to follow an ontology can stick to guardrails
and avoid hallucinations that can be caused by the large language models (LLMs)
powering them. For example, a business policy may define that unless all
documents associated with a loan have their verified flags set to "true," the
loan status should be kept in the “pending” state. Agents can work within this
policy, determining what documents are needed and querying the knowledge base. ...
With this method, we can avoid hallucinations by requiring agents to follow
ontology-driven paths and maintain data classifications and relationships.
Moreover, we can scale easily by adding new assets, relationships and policies
that agents can automatically comply with, and control hallucinations by defining
rules for the whole system rather than individual entities.
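The loan-document policy described above can be sketched as a derived-status guardrail: the agent never sets the status directly; the ontology rule computes it. A minimal sketch, where all names (Document, Loan, verified) are hypothetical illustrations rather than the article's actual schema:

```python
from dataclasses import dataclass

# Guardrail sketch: loan status is derived from the ontology rule, never
# written directly by the agent, so an LLM cannot "hallucinate" an approval.
@dataclass
class Document:
    name: str
    verified: bool = False

@dataclass
class Loan:
    documents: list  # documents the ontology requires for this loan type

    @property
    def status(self) -> str:
        # Business policy: stay "pending" until every required document
        # has its verified flag set to True.
        if self.documents and all(d.verified for d in self.documents):
            return "approved"
        return "pending"

loan = Loan(documents=[Document("income_proof", verified=True),
                       Document("id_card")])
print(loan.status)               # pending: id_card is not yet verified
loan.documents[1].verified = True
print(loan.status)               # approved: the rule now permits it
```

Because the rule lives on the entity rather than in a prompt, adding new document types or loan policies means extending the ontology, not re-tuning each agent.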
The end of apps? Imagining software’s agentic future
Enterprise software vendors are scrambling to embed agents into existing
applications. Oracle Corp. claims to have more than 600 embedded AI agents in
its Fusion Cloud and Industry Applications. SAP says it has more than 40.
... This shift is not simply about embedding AI into existing products, as
generative AI is supplanting conventional menus and dashboards. It’s a
rethinking of software’s core functions. Many experts working on the agentic
future say the way software is built, packaged and used is about to change
profoundly. Instead of being a set of buttons and screens, software will become
a collaborator that interprets goals, orchestrates processes, adapts in real
time and anticipates what users need based on their behavior and implied
preferences. ... The coming changes to enterprise software will go beyond the
interface. AI will force monolithic software stacks to give way to modular,
composable systems stitched together by agents using standards such as the Model
Context Protocol, the Agent2Agent Protocol and the Agent Communication Protocol
that IBM Corp. recently donated to the Linux Foundation. “By 2028, AI agent
ecosystems will enable networks of specialized agents to dynamically collaborate
across multiple applications, allowing users to achieve goals without
interacting with each application individually,” Gartner recently predicted.