Quote for the day:
"Keep steadily before you the fact that all true success depends at last upon yourself." -- Theodore T. Hunger
You already use a software-only approach to passkey authentication - why that matters
After decades of compromises, exfiltrations, and financial losses resulting from
inadequate password hygiene, you'd think that we would have learned by now.
However, even after comprehensive cybersecurity training, research shows that
98% of users are still easily tricked into divulging their passwords to threat
actors. Realizing that hope -- the hope that users will one day fix their
password management habits -- is a futile strategy to mitigate the negative
consequences of shared secrets, the tech industry got together to invent a new
type of login credential. The passkey doesn't involve a shared secret, nor does
it require the discipline or the imagination of the end user. Unfortunately,
passkeys are not as simple to put into practice as passwords, which is why a
fair amount of education is still required. ... Passkeys still involve a secret.
But unlike passwords, users just have no way of sharing it -- not with
legitimate relying parties and especially not with threat actors. ... In most
situations where users are working with passkeys but not using one of the
platform authenticators, they'll most likely be working with a virtual
authenticator. These are essentially BYO authenticators, none of which rely on
the device's underlying security hardware for any passkey-related public key
cryptography or encryption tasks, unlike platform authenticators.
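To make the mechanics concrete, here is a minimal sketch of creating a passkey in the browser with the WebAuthn API. The relying-party and user values are hypothetical placeholders, and the challenge is generated locally only for brevity; in practice it is issued by, and the resulting credential returned to, the relying party's server.

// Minimal sketch of passkey registration via the WebAuthn API.
// rp/user values are hypothetical; the challenge would come from the server.
async function registerPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example RP", id: "example.com" }, // hypothetical relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // gated by biometric or PIN
      },
    },
  })) as PublicKeyCredential;

  // Only the public key and metadata leave the authenticator; the private
  // key -- the secret users have no way of sharing -- never does.
  console.log("New credential ID:", credential.id);
}

Whether that private key lives in dedicated security hardware (platform authenticator) or in software (virtual authenticator) is invisible at this API layer, which is why the distinction above matters.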
Getting started with agentic AI
A working agentic AI strategy relies on AI agents connected by a metadata layer,
whereby people understand where and when to delegate certain decisions to the AI
or pass work to external contractors. It’s a focus on defining the role of the
AI and where people involved in the workflow need to contribute. ... Data
lineage tracking should happen at the code level through metadata propagation
systems that tag every data transformation, model inference and decision point
with unique identifiers. Willson says this creates an immutable audit trail that
regulatory frameworks increasingly demand. According to Willson, advanced
implementations may use blockchain-like append-only logs to ensure governance
data cannot be retroactively modified. ... One of the areas IT leaders need to
consider is that their organisation will more than likely rely on a number of AI
models to support agentic AI workflows. ... Organisations need to have the
right data strategy in place, and they should already be well ahead on their
path to full digitisation, where automation through RPA is being used to connect
many disparate workflows. Agentic AI is the next stage of this automation, where
an AI is tasked with making decisions in a way that would have previously been
too clunky using RPA. However, automating workflows and business processes is
just one piece of the overall jigsaw.
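As a rough illustration of the metadata-propagation idea attributed to Willson, the sketch below tags each transformation, inference, and decision point with a unique identifier and appends it to a write-once log. The type names and the in-memory array are assumptions standing in for a durable lineage store.

// Sketch of code-level lineage tracking: every step is tagged with a unique
// ID and appended to a log that is only ever appended to, never rewritten.
import { randomUUID } from "node:crypto";

interface LineageRecord {
  id: string;         // unique identifier for this step
  parentId?: string;  // upstream step, forming the audit trail
  operation: string;  // e.g. "normalize", "model-inference", "decision"
  timestamp: string;
}

const lineageLog: LineageRecord[] = []; // append-only by convention

function tagStep(operation: string, parentId?: string): string {
  const record: LineageRecord = {
    id: randomUUID(),
    parentId,
    operation,
    timestamp: new Date().toISOString(),
  };
  lineageLog.push(record); // appended, never modified retroactively
  return record.id;
}

// Propagate the ID through each stage so the full chain is reconstructable.
const ingestId = tagStep("ingest-orders");
const cleanId = tagStep("normalize", ingestId);
tagStep("model-inference", cleanId);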
Human-centric IAM is failing: Agentic AI requires a new identity control plane
Agentic AI does not just use software; it behaves like a user. It authenticates
to systems, assumes roles and calls APIs. If you treat these agents as mere
features of an application, you invite invisible privilege creep and untraceable
actions. A single over-permissioned agent can exfiltrate data or trigger
erroneous business processes at machine speed, with no one the wiser until it is
too late. The static nature of legacy IAM is the core vulnerability. You cannot
pre-define a fixed role for an agent whose tasks and required data access might
change daily. The only way to keep access decisions accurate is to move policy
enforcement from a one-time grant to a continuous, runtime evaluation. ...
Securing this new workforce requires a shift in mindset. Each AI agent must be
treated as a first-class citizen within your identity ecosystem. First, every
agent needs a unique, verifiable identity. This is not just a technical ID; it
must be linked to a human owner, a specific business use case and a software
bill of materials (SBOM). The era of shared service accounts is over; they are
the equivalent of giving a master key to a faceless crowd. Second, replace
set-and-forget roles with session-based, risk-aware permissions. Access should
be granted just in time, scoped to the immediate task and the minimum necessary
dataset, then automatically revoked when the job is complete. Think of it as
giving an agent a key to a single room for one meeting, not the master key to
the entire building.
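A minimal sketch of that "one key, one room, one meeting" model follows: each grant binds an agent identity (with its human owner and use case) to a single task and dataset with a short TTL, and every access is re-evaluated at runtime. The shapes and the in-memory grant table are illustrative assumptions, not any particular IAM product.

// Sketch of just-in-time, task-scoped access for an AI agent.
interface AgentIdentity {
  agentId: string;
  humanOwner: string; // every agent identity maps to an accountable person
  useCase: string;    // the specific business purpose it was approved for
}

interface AccessGrant {
  agent: AgentIdentity;
  task: string;
  dataset: string;
  expiresAt: number; // epoch millis; the grant lapses on its own
}

const activeGrants: AccessGrant[] = [];

function grantJustInTime(
  agent: AgentIdentity,
  task: string,
  dataset: string,
  ttlMs = 5 * 60_000 // scoped to the immediate task, then auto-revoked
): AccessGrant {
  const grant: AccessGrant = { agent, task, dataset, expiresAt: Date.now() + ttlMs };
  activeGrants.push(grant);
  return grant;
}

function isAllowed(agentId: string, dataset: string): boolean {
  // Continuous runtime evaluation: checked on every access,
  // not once at provisioning time.
  return activeGrants.some(
    (g) => g.agent.agentId === agentId && g.dataset === dataset && g.expiresAt > Date.now()
  );
}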
Don’t ignore the security risks of agentic AI
We need policy engines that understand intent, monitor behavioral drift and can
detect when an agent begins to act out of character. We need developers to
implement fine-grained scopes for what agents can do, limiting not just which
tools they use, but how, when and under what conditions. Auditability is also
critical. Many of today’s AI agents operate in ephemeral runtime environments
with little to no traceability. If an agent makes a flawed decision, there’s
often no clear log of its thought process, actions or triggers. That lack of
forensic clarity is a nightmare for security teams. In at least some cases,
models resorted to malicious insider behaviors when that was the only way to
avoid replacement or achieve their goals -- including blackmailing officials and
leaking sensitive information to competitors. Finally, we need robust testing
frameworks that simulate adversarial inputs in agentic workflows.
Penetration-testing a chatbot is one thing; evaluating an autonomous agent that
can trigger real-world actions is a completely different challenge. It requires
scenario-based simulations, sandboxed deployments and real-time anomaly
detection. ... Until security is baked into the development lifecycle of agentic
AI, rather than being patched on afterward, we risk repeating the same mistakes
we made during the early days of cloud computing: excessive trust in automation
before building resilient guardrails.
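To make the auditability and scoping points concrete, here is a small assumed sketch of a wrapper that checks every tool call against an agent's scopes, applies a crude rate cap, and records the outcome, so a flawed decision at least leaves a forensic trail. The scope model and log shape are hypothetical.

// Every tool invocation is scope-checked and logged before it runs.
interface Scope {
  tool: string;
  maxCallsPerWindow: number; // crude stand-in for drift/anomaly detection
}

interface AuditEntry {
  agentId: string;
  tool: string;
  args: unknown;
  allowed: boolean;
  at: string;
}

const auditLog: AuditEntry[] = [];
const callCounts = new Map<string, number>(); // reset per window elsewhere

function invokeTool(
  agentId: string,
  scopes: Scope[],
  tool: string,
  args: unknown,
  impl: (args: unknown) => unknown
): unknown {
  const scope = scopes.find((s) => s.tool === tool);
  const key = `${agentId}:${tool}`;
  const count = (callCounts.get(key) ?? 0) + 1;
  callCounts.set(key, count);

  // Deny if the tool is out of scope or the agent is acting out of character
  // (here, simply calling far more often than its scope allows).
  const allowed = scope !== undefined && count <= scope.maxCallsPerWindow;
  auditLog.push({ agentId, tool, args, allowed, at: new Date().toISOString() });

  if (!allowed) throw new Error(`Agent ${agentId} denied tool: ${tool}`);
  return impl(args);
}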
How Technological Continuity and High Availability Strengthen IT Resilience in Critical Sectors
Within the context of business continuity, high availability ensures technology
supports the organization’s ability to operate without disruption. It minimizes
downtime and maintains the confidentiality, integrity, and availability of
information. ... To achieve true high availability, organizations implement
architectures that combine redundancy, automation, and fault tolerance. Database
replication, whether synchronous or asynchronous, allows data to be duplicated
across primary and secondary nodes, ensuring continuous access in the event of a
failure. Synchronous replication guarantees data consistency but introduces
latency, while asynchronous models reduce latency at the expense of a small data
gap. Both approaches, when properly configured, strengthen the integrity and
continuity of critical databases. ... One of the most effective strategies to
reduce technological dependence is the implementation of hybrid continuity
models that integrate both on-premises and cloud environments. Organizations
that rely exclusively on a single cloud service provider expose themselves to
the risk of total outage if that provider experiences downtime or disruption. By
maintaining mirrored environments between cloud infrastructure and local
servers, it is possible to achieve operational flexibility and independence
across channels.
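The synchronous/asynchronous trade-off described above can be reduced to a toy sketch: the synchronous path acknowledges a write only after the replica confirms it (consistency at the cost of latency), while the asynchronous path acknowledges immediately and replicates in the background (lower latency, a small window of possible data loss). The Replica interface is an assumption for illustration, not a real driver API.

// Toy model of primary/secondary replication modes.
interface Replica {
  apply(key: string, value: string): Promise<void>;
}

class PrimaryNode {
  private data = new Map<string, string>();
  constructor(private replica: Replica) {}

  // Synchronous: acknowledged only after the replica confirms.
  // Guarantees consistency, adds round-trip latency to every write.
  async writeSync(key: string, value: string): Promise<void> {
    this.data.set(key, value);
    await this.replica.apply(key, value);
  }

  // Asynchronous: acknowledged immediately; the replica catches up in the
  // background, leaving a small gap if the primary fails mid-flight.
  writeAsync(key: string, value: string): void {
    this.data.set(key, value);
    void this.replica.apply(key, value).catch((err) => {
      console.error("replication lagging; needs retry or alerting:", err);
    });
  }
}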
The tech that turns supply chains from brittle to unbreakable
When organizations begin crafting a supply chain strategy, one of the most common misconceptions is viewing it as purely a logistics exercise rather than a holistic framework that spans procurement, planning and risk management. Another frequent misstep is underestimating the role of technology. Digital tools are essential for visibility, predictive analytics and automation, not optional extras. Equally critical is recognizing that strategy is not static; it must evolve continuously to address shifting market conditions and emerging threats. ... Resilience comes from treating cyber and physical risks as one integrated challenge. That means embedding security into every layer of the supply chain, from vendor onboarding to logistics execution, while leveraging advanced visibility tools and zero trust principles. ... Executive buy-in for resilience investments begins with reframing the conversation from cost to value. We position resilience as a strategic enabler rather than an expense by linking it to business continuity, customer trust and competitive advantage. Instead of focusing solely on immediate ROI, emphasize measurable risk reduction, regulatory compliance and the cost of inaction during disruptions. Use real-world scenarios and data to show how resilience safeguards revenue streams and accelerates recovery when crises hit. Engage executives early, align initiatives with corporate objectives and present resilience as a driver of long-term growth and brand reputation.
ISO and ISMS: 9 reasons security certifications go wrong
Without management’s commitment, it’s often difficult to get all employees on
board and ensure that ISO standards, or even IT baseline protection standards,
are integrated into daily business operations. As a result, companies should
provide top-down clarity about the importance of such initiatives — even if
implementation can be costly and inconvenient. “Cleaning up” isn’t always
pleasant, but the result is all the more worthwhile. ... Without genuine
integration into daily operations, the certification becomes useless, and the
benefits it offers remain unrealized. In the worst-case scenario, organizations
even end up losing money, while also missing out on the implementation’s
potential value. When integrating a management system, it’s important not to get
bogged down in details. The practical application of the system in real-world
work situations is crucial for its success. ... Employees need to understand why
the implementation is important, how it will be integrated into their daily
workflows, and how it will make their work easier. If this isn’t the case, it
will be difficult to implement the system and maintain any resulting
certification. ... Without a detailed plan, companies focus on areas that are
irrelevant or do not meet the requirements of the ISO/IT baseline protection
standards. Furthermore, if the implementation of a management system takes too
long, regular business development can overtake the process itself, resulting in
duplicated work to keep up with changes.
State of the API 2025: API Strategy Is Becoming AI Strategy
What distinguishes fully API-first teams? They treat APIs as long-lived products with roadmaps, SLAs, versioning, and feedback loops. They align product and engineering early, embed governance into workflows, and standardize patterns so that consumers, human or agent, can rely on consistent contracts. In our experience, that "productization" of APIs is what unlocks long-lived, reusable APIs and parallel delivery. When your agents can trust your schemas, error semantics, and rate-limit behaviors, they can compose capabilities far faster than code-level abstractions ever could. ... As AI agents become primary API consumers, security assumptions must evolve. 51% of developers cite unauthorized or excessive agent calls as a top concern; 49% worry about AI systems accessing sensitive data they shouldn't; and 46% highlight the risk of credential leakage and over-scoped keys. Traditional controls, designed for predictable human traffic, struggle against machine-speed persistence, long-running automation, and credential amplification. ... Even as API-first adoption grows, collaboration remains a bottleneck. 93% of teams report challenges such as inconsistent documentation, duplicated work, and difficulty discovering existing APIs. With 69% of respondents spending 10+ hours per week on API-related tasks, and with a global workforce, asynchronous collaboration is the norm.
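One way to read the "consistent contracts" point: if every endpoint returns the same envelope with explicit versioning, stable error codes, and machine-readable rate-limit state, an agent can handle any endpoint uniformly instead of special-casing each one. The envelope below is a hypothetical illustration, not a published standard.

// Hypothetical uniform API contract for human and agent consumers.
interface RateLimitInfo {
  limit: number;      // allowed calls per window
  remaining: number;  // calls left in the current window
  resetAt: string;    // ISO timestamp when the window resets
}

interface ApiError {
  code: string;       // stable, documented code, e.g. "RATE_LIMITED"
  message: string;    // human-readable detail
  retryable: boolean; // lets an agent back off instead of guessing
}

interface Envelope<T> {
  apiVersion: string; // explicit versioning is part of the contract
  data?: T;
  error?: ApiError;
  rateLimit: RateLimitInfo;
}

// An agent can make a safe, uniform decision from the contract alone.
function shouldBackOff(res: Envelope<unknown>): boolean {
  return res.rateLimit.remaining === 0 || !!res.error?.retryable;
}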
Embedded Intelligence: JK Tyre's Smart Tyre Use Case
Unlike traditional valve-mounted tire pressure monitoring devices, or TPMS,
these sensors are permanently integrated for consistent data accuracy. Each chip
is designed to last five to seven years, depending on usage and conditions.
"These sensors are permanently embedded during the assembly process," said V.K.
Misra, technical director at JK Tyre. "They continuously send live data on air
pressure and temperature to the dashboard and mobile device. The moment there's
a variation, the driver is alerted before a small problem becomes a serious
risk." ... The embedded version takes this further by integrating the chip
within the tire's internal structure, creating a closed feedback loop between
the tire, the driver and the cloud. "We have created an entire connected
ecosystem," Misra said. "The tire is just the beginning. The data generated
feeds predictive models for maintenance and safety. Through Treel, our platform
can now talk to vehicles, drivers and service networks simultaneously." The
Treel platform processes sensor data through APIs and cloud analytics, providing
actionable insights for drivers and fleet operators. Over time, this data
contributes to predictive maintenance models, product design improvements and
operational analytics for connected vehicles. ... "AI allows decisions that
earlier took days to happen within minutes," Misra said. "It also provides
valuable data on wear patterns and helps us improve quality control across
plants."