The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and
  a tools plane. The tools plane is a collection of APIs,
  stored procedures and external web calls to business partners. However, for
  organizations that have started their AI journey, it could also include calls
  to traditional machine learning models (wave No. 1) and LLMs (wave No. 2)
  operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs
  with full knowledge of an organization’s tools plane and allow them to build
  and execute the logic needed for the control plane. This can be done by
  providing a “few-shot” prompt to an LLM that has been fine-tuned on an
  organization’s tools plane. Below is an example of a “few-shot” prompt that
  answers the same hypothetical question presented earlier. This is also known
  as letting the LLM think slowly. ... If agentic AI still seems like too much
  magic, then consider the simple example below. Every developer who
  has to write code daily probably asks an LLM a question similar to the one
  below. ... Agentic AI is the next logical evolution of AI. It is based on
  capabilities with a solid footing in AI’s first and second waves. The promise
  is to use AI to solve more complex problems by allowing models to plan,
  execute tasks and revise; in other words, allowing them to think slowly. This
  also promises to produce more accurate responses.
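As a concrete illustration of the control-plane/tools-plane split, consider the
  minimal agent loop sketched below in Python. The tool names and the call_llm
  helper are hypothetical stand-ins for an organization's tools plane and its
  LLM endpoint, not part of the original article; real agent frameworks layer
  planning, retries and guardrails on top of this pattern.

    # Minimal agentic loop: the LLM makes control-plane decisions,
    # the tools plane executes them. All names here are illustrative.

    TOOLS = {
        # Hypothetical tools-plane entries: name -> callable
        "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
        "refund_order": lambda order_id: {"order_id": order_id, "refunded": True},
    }

    def call_llm(prompt: str) -> dict:
        """Placeholder for a fine-tuned LLM endpoint. Assumed to return either
        {"tool": name, "args": {...}} or {"answer": text}; the few-shot prompt
        teaches the model this format."""
        raise NotImplementedError("wire this to your LLM provider")

    def run_agent(question: str, max_steps: int = 5) -> str:
        history = [f"User question: {question}"]
        for _ in range(max_steps):
            decision = call_llm("\n".join(history))
            if "answer" in decision:          # the control plane is done
                return decision["answer"]
            tool = TOOLS[decision["tool"]]    # dispatch into the tools plane
            result = tool(**decision["args"])
            history.append(f"Tool {decision['tool']} returned: {result}")
        return "Could not resolve the request within the step budget."
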
AI datacenters putting zero emissions promises out of reach
Datacenters' use of water and land is another bone of contention, which, in
  combination with their reliance on tax breaks and the limited number of local
  jobs they deliver, will see them face growing opposition from local residents
  and environmental groups. Uptime highlights that many governments have set
  targets for GHG emissions to become net-zero by a set date, but warns that
  because the AI boom looks set to test power availability, it will almost
  certainly put these pledges out of reach. ... Many governments seem convinced
  of the economic benefits promised by AI at the expense of other concerns, the
  report notes. The UK is a prime example, this week publishing the AI
  Opportunities Action Plan and vowing to relax planning rules to prioritize
  datacenter builds. ... Increasing rack power presents several challenges, the
  report warns, including the sheer space taken up by power distribution
  infrastructure such as switchboards, UPS systems, distribution boards, and
  batteries. Without changes to the power architecture, many datacenters risk
  becoming electrical plants built around relatively small IT rooms. Solving
  this will call for changes such as medium-voltage (over 1 kV) distribution to
  the IT space and novel power distribution topologies. However, this overhaul
  will take time to unfold, with 2025 potentially a pivotal year for investment
  to make this possible.
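To see why medium-voltage distribution helps, a rough worked example (the
  figures below are illustrative assumptions, not from the Uptime report): for
  a fixed load, current scales inversely with voltage, and conductor and
  switchgear sizes track current.

    import math

    # Illustrative only: a 2 MW IT hall fed three-phase at low voltage
    # (415 V) versus medium voltage (11 kV), power factor 0.95.
    power_w = 2_000_000
    pf = 0.95

    for volts in (415, 11_000):
        # Three-phase line current: I = P / (sqrt(3) * V_line * pf)
        amps = power_w / (math.sqrt(3) * volts * pf)
        print(f"{volts:>6} V -> {amps:,.0f} A")

    # ~2,900 A at 415 V versus ~110 A at 11 kV: roughly 26x less current,
    # hence far smaller busbars, switchboards and distribution losses.
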
State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full
  passkey-readiness of devices, operating systems and browsers. Apple (iOS,
  macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have
  fully integrated passkey support across their platforms: over 95 percent of
  all iOS and Android devices are passkey-ready, and over 90 percent have
  passkey functionality enabled. With Windows
  soon supporting synced passkeys, all major operating systems will ensure
  users can
  securely and effortlessly access their credentials across devices. ... With
  full device support, a polished UX, growing user familiarity, and a proven
  track record among early adopter implementations, there’s no reason for
  businesses to delay adopting passkeys. The business advantages of passkeys are
  compelling. Companies that previously relied on SMS-based authentication can
  save considerably on SMS costs. Beyond that, enterprises adopting passkeys
  benefit from reduced support overhead (since fewer password resets are
  needed), lower risk of breaches (thanks to phishing-resistance), and optimized
  user flows that improve conversion rates. Collectively, these perks make a
  convincing business case for passkeys.
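Under the hood, the phishing-resistance mentioned above comes from public-key
  challenge-response bound to the site's origin. The Python sketch below (using
  the cryptography package) compresses the WebAuthn ceremony into a few lines
  and omits attestation, counters and encoding details, so treat it as a
  conceptual model rather than an implementation.

    # Conceptual passkey flow: sign a server challenge together with the
    # origin, so a credential for the real site is useless on a phishing site.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the authenticator creates a per-site key pair and the
    # server stores only the public key.
    private_key = ec.generate_private_key(ec.SECP256R1())
    server_stored_public_key = private_key.public_key()

    # Authentication: the server issues a random challenge; the authenticator
    # signs the challenge plus the origin the browser is actually talking to.
    challenge = os.urandom(32)
    origin = b"https://example.com"   # supplied by the browser, not the user
    signature = private_key.sign(challenge + origin, ec.ECDSA(hashes.SHA256()))

    # The server verifies against the origin it expects; a phishing origin
    # would produce a signature that fails this check.
    try:
        server_stored_public_key.verify(
            signature, challenge + b"https://example.com",
            ec.ECDSA(hashes.SHA256()))
        print("login ok")
    except InvalidSignature:
        print("rejected")
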
Balancing usability and security in the fight against identity-based attacks
AI and ML are a double-edged sword in cybersecurity. On one hand,
  cybercriminals are using these technologies to make their attacks faster and
  smarter. They can create highly convincing phishing emails, generate deepfake
  content, and even find ways to bypass traditional security measures. For
  example, generative AI can craft emails or videos that look almost real,
  tricking people into falling for scams. On the flip side, AI and ML are also
  helping defenders. These technologies allow security systems to quickly
  analyze vast amounts of data, spotting unusual behavior that might indicate
  compromised credentials. ... Targeted security training can be useful, but
  generally you want to reduce the dependency on humans as much as possible.
  This is why it is critical to have controls that meet users where they are.
  If you can
  deliver point-in-time guidance, or straight up technically prevent something
  like a user entering their password into a phishing site, it significantly
  reduces the dependency on the human to make the right decision unassisted
  every time. When you consider how hard it can be for even security
  professionals to spot the more sophisticated phishing sites, it’s essential
  that we help people out as much as possible with technical controls.
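As a sketch of the defensive side described above (flagging unusual login
  behavior that might indicate compromised credentials), here is a minimal
  anomaly-detection example using scikit-learn's IsolationForest. The features
  and numbers are invented for illustration; real systems use far richer
  signals.

    # Toy login-anomaly detector: flag logins whose (hour, distance, failures)
    # pattern deviates from historical behavior. Features are illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" history. Columns: login hour, km from usual
    # location, failed attempts before success.
    normal = np.column_stack([
        rng.normal(10, 2, 500),
        rng.exponential(5, 500),
        rng.poisson(0.2, 500),
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    candidates = np.array([
        [11, 3, 0],       # typical login
        [3, 8000, 6],     # 3 a.m., another continent, many failed attempts
    ])
    print(model.predict(candidates))   # 1 = normal, -1 = flagged anomalous
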
Understanding Leaderless Replication for Distributed Data
Leaderless replication is another fundamental replication approach for
  distributed systems. It alleviates problems of multi-leader replication while
  introducing problems of its own. Write conflicts in
  multi-leader replication are tackled in leaderless replication with
  quorum-based writes and systematic conflict resolution. Cascading failures,
  synchronization overhead, and operational complexity can be handled in
  leaderless replication via its decentralized architecture. Removing leaders
  can simplify cluster management, failure handling, and recovery mechanisms.
  Any replica can handle reads and writes. ... Direct writes and
  coordination-based
  replication are the most common approaches in leaderless replication. In the
  first approach, clients write directly to node replicas, while in the second
  approach, there exist coordinator-mediated writes. It is worth mentioning
  that, unlike the leader-follower concept, coordinators in leaderless
  replication do not enforce a particular ordering of writes. ... Failure
  handling is one of the most challenging aspects of both approaches. While
  direct writes provide better theoretical availability, they can be problematic
  during failure scenarios. Coordinator-based systems can provide clearer
  failure semantics but at the cost of potential coordinator bottlenecks.
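The quorum-based writes mentioned above are usually stated as the condition
  W + R > N: with N replicas, a write acknowledged by W nodes and a read that
  queries R nodes must overlap on at least one up-to-date replica. Below is a
  minimal sketch with an invented versioning scheme; it models no failures,
  hinted handoff or read repair.

    # Minimal quorum read/write over N in-memory replicas (W + R > N).
    N, W, R = 5, 3, 3
    assert W + R > N, "read and write quorums must overlap"

    replicas = [dict() for _ in range(N)]   # replica: key -> (version, value)

    def write(key, value, version):
        acks = 0
        for node in replicas[:W]:           # real systems pick W healthy nodes
            node[key] = (version, value)
            acks += 1
        return acks >= W                    # succeed only with a write quorum

    def read(key):
        # Query R nodes; at least one must overlap with the write quorum.
        answers = [node[key] for node in replicas[-R:] if key in node]
        return max(answers) if answers else None   # highest version wins

    write("user:42", "alice@old.example", version=1)
    write("user:42", "alice@new.example", version=2)
    print(read("user:42"))                  # (2, 'alice@new.example')
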
Blockchain in Banking: Use Cases and Examples
Bitcoin has entered a space usually reserved for gold and sovereign bonds:
  national reserves. While the U.S. Federal Reserve maintains that it cannot
  hold Bitcoin under current regulations, other financial systems are paying
  close attention to its potential role as a store of value. On the global
  stage, Bitcoin is being viewed not just as a speculative asset but as a hedge
  against inflation and currency volatility. Governments are now debating
  whether digital assets can sit alongside gold bars in their vaults. Behind all
  this activity lies blockchain, providing transparency, security, and a
  framework for something as ambitious as a digital reserve currency. ...
  Assets such as real estate, investment funds, or fine art are
  traditionally expensive, hard to divide, and slow to transfer. Blockchain
  changes this by converting these assets into digital tokens, enabling
  fractional ownership and simplifying transactions. UBS launched its first
  tokenized fund on the Ethereum blockchain, allowing investors to trade fund
  shares as digital assets. This approach reduces administrative costs,
  accelerates settlements, and improves accessibility for investors.
  Additionally, one of Central and Eastern Europe’s largest banks has tokenized
  fine art on Aleph Zero blockchain. This enables fractional ownership of
  valuable art pieces while maintaining verifiable proof of ownership and
  authenticity.
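To make the fractional-ownership mechanics concrete, here is a deliberately
  simplified token-ledger sketch in Python. It mirrors the balance-and-transfer
  core of ERC-20-style tokens but is an illustration only; real tokenized funds
  add custody, compliance and on-chain settlement.

    # Toy fractional-ownership ledger: one asset split into fungible shares.
    class AssetToken:
        def __init__(self, asset_name: str, total_shares: int, issuer: str):
            self.asset_name = asset_name
            self.balances = {issuer: total_shares}  # issuer holds all shares

        def transfer(self, sender: str, recipient: str, shares: int) -> None:
            if self.balances.get(sender, 0) < shares:
                raise ValueError("insufficient shares")
            self.balances[sender] -= shares
            self.balances[recipient] = self.balances.get(recipient, 0) + shares

    # A (hypothetical) painting tokenized into 1,000 shares.
    art = AssetToken("Untitled No. 7", total_shares=1_000, issuer="bank")
    art.transfer("bank", "investor_a", 150)
    art.transfer("bank", "investor_b", 50)
    print(art.balances)   # {'bank': 800, 'investor_a': 150, 'investor_b': 50}
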
Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across
  multiple edge devices. This approach eliminates the need to transfer raw data
  to a central server, preserving privacy and reducing bandwidth consumption.
  Models are trained locally, with only aggregated updates shared to improve the
  global system. ... Localized data processing empowers edge devices to conduct
  real-time analytics, facilitating faster decision-making and minimizing
  reliance on central frameworks. This capability is fundamental for
  applications such as autonomous vehicles and industrial automation, where even
  milliseconds can be vital. ... Blockchain technology is pivotal in
  decentralized AI for edge computing by providing a secure, immutable ledger
  for data sharing and task execution across edge nodes. It ensures transparency
  and trust in resource allocation, model updates, and data verification
  processes. ... By processing data directly at the edge, decentralized AI
  removes the delays in sending data to and from centralized servers. This
  capability ensures faster response times, enabling near-instantaneous
  decision-making in critical real-time applications. ... Decentralized AI
  improves privacy by enabling sensitive information to be processed locally on
  the device rather than sent to external servers.
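The federated-learning pattern described above can be sketched in a few lines:
  each device fits a model on its own data and only the aggregated update
  travels. Below is a minimal federated-averaging (FedAvg-style) example in
  Python with NumPy; the linear model and synthetic data are stand-ins for
  real edge workloads.

    # Minimal FedAvg-style training: devices learn locally on private data,
    # and only weight vectors (never raw data) reach the aggregator.
    import numpy as np

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -3.0])

    def local_update(w, X, y, lr=0.1, steps=20):
        # A few steps of local gradient descent on one device's data.
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    # Three edge devices, each holding its own private dataset.
    devices = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        devices.append((X, y))

    global_w = np.zeros(2)
    for _ in range(5):                         # five federated rounds
        local_ws = [local_update(global_w, X, y) for X, y in devices]
        global_w = np.mean(local_ws, axis=0)   # server averages the updates

    print(global_w)   # approaches [2.0, -3.0]; raw data never left a device
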
The Myth of Machine Learning Reproducibility and Randomness
The nature of ML systems contributes to the challenge of reproducibility. ML
  components implement statistical models that provide predictions about some
  input, such as whether an image is a tank or a car. But it is difficult to
  provide guarantees about these predictions. As a result, guarantees about the
  resulting probabilistic distributions are often given only in the limit, that
  is,
  as distributions across a growing sample. These outputs can also be described
  by calibration scores and statistical coverage, such as, “We expect the true
  value of the parameter to be in the range [0.81, 0.85] 95 percent of the
  time.” ... There are two basic techniques we can use to manage
  reproducibility. First, we control the seeds for every randomizer used. In
  practice there may be many. Second, we need a way to tell the system to
  serialize the training process executed across concurrent and distributed
  resources. Both approaches require the platform provider to include this sort
  of support. ... Despite the importance of these exact reproducibility modes,
  they should not be enabled during production. Engineering and testing should
  use these configurations for setup, debugging and reference tests, but not
  during final development or operational testing.
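The first technique, pinning the seed of every randomizer, looks like this in
  a typical Python stack. This sketch covers only the standard library and
  NumPy; frameworks such as PyTorch and TensorFlow expose their own seeds plus
  switches for deterministic kernels, which is the platform support the
  article refers to.

    # Pin every source of randomness the training run touches.
    import os
    import random
    import numpy as np

    SEED = 1234

    os.environ["PYTHONHASHSEED"] = str(SEED)  # must be set before interpreter
                                              # start to affect hashing
    random.seed(SEED)                         # stdlib randomizer
    np.random.seed(SEED)                      # NumPy's legacy global generator

    # Prefer explicit generators so the seed is visible at each call site.
    rng = np.random.default_rng(SEED)
    print(rng.integers(0, 100, size=3))       # identical on every run
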
The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted
  in sophisticated cyber-attacks. These attacks, often aimed at causing
  irreversible physical damage to critical engineering assets, highlight the
  risks of interconnected and digitized systems. Recent incidents like TRISIS,
  CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber
  threats from mere nuisances to potentially catastrophic events, orchestrated
  by state-sponsored groups and cybercriminals. These actors target not just
  financial gains but also disruptive outcomes and acts of warfare, blending
  cyber and physical attacks. Additionally, human-operated ransomware and
  targeted ICS/OT ransomware are a growing concern.
  ... Traditional IT security measures, when applied to ICS/OT environments, can
  provide a false sense of security and disrupt engineering operations and
  safety. Thus, it is important to consider and prioritize the SANS Five ICS
  Cybersecurity Critical Controls. This freely available whitepaper sets forth
  the five most relevant critical controls for an ICS/OT cybersecurity strategy
  that can flex to an organization's risk model and provides guidance for
  implementing them.
Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and
  current employee skills, blending AI tools with hands-on evaluations,
  according to Monahan. AI-powered platforms are being used to match candidates
  with roles based on their skills, certifications, and experience. “Our
  platform has done this for years, and our new UMA (Upwork’s Mindful AI)
  enhances this process,” she said. Gartner, however, warned that “rapid skills
  evolutions can threaten quality of hire, as recruiters struggle to ensure
  their assessment processes are keeping pace with changing skills. Meanwhile,
  skills shortages place more weight on new hires being the right hires, as
  finding replacement talent becomes increasingly challenging. Robust appraisal
  of candidate skills is therefore imperative, but too many assessments can lead
  to candidate fatigue.” ... The shift toward skills-based hiring is further
  driven by a readiness gap in today’s workforce. Upwork’s research found that
  only 25% of employees feel prepared to work effectively alongside AI, and even
  fewer (19%) can proactively leverage AI to solve problems. “As companies
  navigate these challenges, they’re focusing on hiring based on practical,
  demonstrated capabilities, ensuring their workforce is agile and equipped to
  meet the demands of a rapidly evolving business landscape,” Monahan said.
Quote for the day:
“If you set your goals ridiculously high and it’s a failure, you will fail
  above everyone else’s success.” -- James Cameron