Quote for the day:
"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher
From in-house CISO to consultant. What you need to know before making the leap
A growing number of CISOs are either moving into consulting roles or seriously
considering it. The appeal is easy to see: more flexibility and quicker
learning, alongside steady demand for experienced security leaders. Some of
these professionals work as virtual CISOs (vCISOs), advising companies from a
distance. Others operate as fractional CISOs, embedding into the organization
one or two days a week. ... CISOs line up their first clients while they’re
still employed. Otherwise, he says, it can take a long time to build momentum.
And the pressure to make it work can quickly turn into panic. In that moment,
security professionals may start “underpricing themselves because they need
money immediately,” he says. Once rates are set out of desperation, they’re
often hard to reset without straining the relationship. Other
CISOs-turned-consultants also emphasize preparation. ... Many of the skills
CISOs honed inside large organizations translate directly to the new consulting
job, while others suddenly matter more than they ever did before. In addition to
technical skills, it is often the practical ones that prove most valuable. The
ability to prioritize — sharpened over years in a CISO role — becomes especially
important in consulting. ... Crisis management is another essential skill.
Paired with hands-on knowledge of cybersecurity processes and best practices, it
gives former CISOs a real advantage as they move into consulting.New phishing campaign tricks employees into bypassing Microsoft 365 MFA
The message purports to be about a corporate electronic funds payment, a
document about salary bonuses, a voicemail, or contains some other lure. It also
includes a code for ‘Secure Authorization’ that the user is asked to enter when
they click on the link, which takes them to a real Microsoft Office 365 login
page. Victims think the message is legitimate, because the login page is
legitimate, so enter the code. But unknown to the victim, it’s actually the code
for a device controlled by the threat actor. What the victim has done is issued
an OAuth token granting the hacker’s device access to their Microsoft account.
From there, the hacker has access to everything the account allows the employee
to use. Note that this isn’t about credential theft, although if the attacker
wants credentials, they can be stolen. It’s about stealing the victim’s OAuth
access and refresh tokens for persistent access to their Microsoft account,
including to applications such as Outlook, Teams, and OneDrive. ... The main
defense against the latest version of this attack is to restrict the
applications users are allowed to connect to their account, he said. Microsoft
provides enterprise administrators with the ability to allowlist specific
applications that the user may authorize via OAuth. ... The easiest defense is
to turn off the ability to add extra login devices to Office 365, unless it’s
needed, he said. In addition, employees should also be continuously educated
about the risks of unusual login requests, even if they come from a familiar
system.The 200ms latency: A developer’s guide to real-time personalization
The first hurdle every developer faces is the “cold start.” How do you
personalize for a user with no history or an anonymous session? Traditional
collaborative filtering fails here because it relies on a sparse matrix of past
interactions. If a user just landed on your site for the first time, that matrix
is empty. To solve this within a 200ms budget, you cannot afford to query a
massive data warehouse to look for demographic clusters. You need a strategy
based on session vectors. We treat the user’s current session as a real-time
stream. ... Another architectural flaw I frequently encounter is the dogmatic
attempt to run everything in real-time. This is a recipe for cloud bill
bankruptcy and latency spikes. You need a strict decision matrix to decide
exactly what happens when the user hits “load.” We divide our strategy based on
the “Head” and “Tail” of the distribution. ... Speed means nothing if the system
breaks. In a distributed system, a 200ms timeout is a contract you make with the
frontend. If your sophisticated AI model hangs and takes 2 seconds to return,
the frontend spins and the user leaves. We implement strict circuit breakers and
degraded modes. ... We are moving away from static, rule-based systems toward
agentic architectures. In this new model, the system does not just recommend a
static list of items. It actively constructs a user interface based on intent.
This shift makes the 200ms limit even harder to hit. It requires a fundamental
rethink of our data infrastructure.Spec-Driven Development – Adoption at Enterprise Scale
Tech layoffs in 2026: Why skills matter more than experience in tech
The impact of AI on tech jobs India is becoming visible as companies
prioritise data science and machine learning skills over conventional IT
roles. During decades, layoffs were typically associated with the economic
recession or lack of revenue in companies. The difference between the present
wave is the involvement of automation and strategic restructuring. Although
automation has had beneficial impacts on increasing productivity, it implies
that jobs that aim at routine and repetitive duties continue to be at risk.
... The traditional career trajectories based on experience or seniority are
replaced by market needs of niche skills in machine learning, data
engineering, cloud architecture, and product leadership. Employees whose
skills have not increased are more exposed to displacement in the event of
reorganisation of the companies. These developments explain why tech
professionals must reskill to remain employable in an AI-driven industry. The
tech labor force in India, which is also one of the largest in the world, is
especially vulnerable to the change. ... The future of tech jobs in India 2026
will favour professionals who combine technical expertise with analytical and
problem-solving skills. The layoffs in early 2026 explain why the technology
industry is vulnerable to job losses because corporate interests can change
rapidly. To individuals, it entails being future-ready through the development
of skills that would be relevant in the industry direction, including AI
integration, cybersecurity, cloud computing, and advanced analytics.
Secrets Management Failures in CI/CD Pipelines
Hardcoded secrets are still the most entrenched security issue. API keys, access tokens and private certificates continue to live in the configuration files of the pipeline, shell scripts or application manifests. While the repository is private, security exposure is the result of only one misconfiguration or breached account. Once committed, secrets linger for months or even years, far outlasting the necessary rotation period. Another common failure is secret sprawl. CI/CD pipelines accumulate credentials over time with no clear ownership. Old tokens remain active because nobody remembers which service depends on them. Thus, as the pipeline develops, secrets management becomes reactive rather than intentional, compromising the likelihood of exposing credentials. Over-permissioned credentials make things worse. ... Technology is not the reason for most secrets management failures; it’s people. Developers tend to copy and paste credentials when they’re trying to get to the bottom of some problem or other. They might even just bypass the security safeguards because things are tight against the wire. It’s pretty easy for nobody to keep absolutely on top of their security posture as your CI/CD pipelines evolve. It’s just exactly for this reason that a DevSecOps culture is important. It has got to be more than just the tools; it has got to be how we all work together to get the job done. Security teams must recognize that what is needed is to consider the CI/CD pipeline as production infrastructure, not some internal tool that can be altered ‘on the fly’.Agentic AI systems don’t fail suddenly — they drift over time
As organizations move from experimentation to real operational deployment of
agentic AI, a new category of risk is emerging — one that traditional AI
evaluation, testing and governance practices often struggle to detect. ...
Most enterprise AI governance practices evolved around a familiar mental
model: a stateless model receives an input and produces an output. Risk is
assessed by measuring accuracy, bias or robustness at the level of individual
predictions. Agentic systems strain that model. The operational unit of risk
is no longer a single prediction, but a behavioral pattern that emerges over
time. An agent is not a single inference. It is a process that reasons across
multiple steps, invokes tools and external services, retries or branches when
needed, accumulates context over time and operates inside a changing
environment. Because of that, the unit of failure is no longer a single
output, but the sequence of decisions that leads to it. ... In real
environments, degradation rarely begins with obviously incorrect outputs. It
shows up in subtler ways, such as verification steps running less
consistently, tools being used differently under ambiguity, retry behavior
shifting or execution depth changing over time. ... Without operational
evidence, governance tends to rely more on intent and design assumptions than
on observed reality. That’s not a failure of governance so much as a missing
layer. Policy defines what should happen, diagnostics help establish what is
actually happening and controls depend on that evidence.Prompt Control is the New Front Door of Application Security
Application security has always been built around a simple assumption: There
is a front door. Traffic enters through known interfaces, authentication
establishes identity, authorization constrains behavior, and controls
downstream enforcement of policy. That model still exists, but our most recent
research shows it no longer captures where risk actually concentrates in
AI-driven systems. ... Prompts are where intent enters the system. They define
not only what a user is asking, but how the model should reason, what context
it should retain, and which safeguards it should attempt to bypass. That is
why prompt layers now outrank traditional integration points as the most
impactful area for both application security and delivery. ... Output
moderation still matters, and our research shows it remains a meaningful
concern. But its lower ranking is telling. Output controls catch problems
after the system has already behaved badly. They are essential guardrails, not
primary defenses. It’s always more efficient to stop the thief on the way in
rather than try to catch him after the fact, and in the case of inference,
it’s less costly because stopping on the ingress means no token processing
costs incurred. ... Our second set of findings reinforces this point.
Authentication and observability lead the methods organizations use to secure
and deliver AI inference services, cited by 55% and 54% of respondents,
respectively. This holds true across roles, with the exception of developers,
who more often prioritize protection against sensitive data leaks.The 'last-mile' data problem is stalling enterprise agentic AI — 'golden pipelines' aim to fix it
Traditional ETL tools like dbt or Fivetran prepare data for reporting:
structured analytics and dashboards with stable schemas. AI applications need
something different: preparing messy, evolving operational data for model
inference in real-time. Empromptu calls this distinction "inference integrity"
versus "reporting integrity." Instead of treating data preparation as a
separate discipline, golden pipelines integrate normalization directly into
the AI application workflow, collapsing what typically requires 14 days of
manual engineering into under an hour, the company says. Empromptu's "golden
pipeline" approach is a way to accelerate data preparation and make sure that
data is accurate. ... "Enterprise AI doesn't break at the model layer, it
breaks when messy data meets real users," Shanea Leven, CEO and co-founder of
Empromptu told VentureBeat in an exclusive interview. "Golden pipelines bring
data ingestion, preparation and governance directly into the AI application
workflow so teams can build systems that actually work in production." ...
Golden pipelines target a specific deployment pattern: organizations building
integrated AI applications where data preparation is currently a manual
bottleneck between prototype and production. The approach makes less sense for
teams that already have mature data engineering organizations with established
ETL processes optimized for their specific domains, or for organizations
building standalone AI models rather than integrated applications.From installation to predictive maintenance: The new service backbone of AI data centers
AI workloads bring together several shifts at once: much higher rack densities,
more dynamic load profiles, new forms of cooling, and tighter integration
between electrical and digital systems. A single misconfiguration in the power
chain can have much wider consequences than would have been the case in a
traditional facility. This is happening at a time when many operators struggle
to recruit and retain experienced operations and maintenance staff. The
personnel on site often have to cope with hybrid environments that combine
legacy air-cooled rooms with liquid-ready zones, energy storage, and multiple
software layers for control and monitoring. In such an environment, services are
not a ‘nice to have’. ... As architectures become more intricate, human error
remains one of the main residual risks. AI-ready infrastructures combine complex
electrical designs, liquid cooling circuits, high-density rack layouts, and
multiple software layers such as EMS, BMS and DCIM. Operating and maintaining
such systems safely requires clear procedures and a high level of discipline.
... In an AI-driven era, service strategy is as important as the choice of UPS
topology, cooling technology or energy storage. Commissioning, monitoring,
maintenance, and training are not isolated activities. Together, they form a
continuous backbone that supports the entire lifecycle of the data center.
Well-designed service models help operators improve availability, optimise
energy performance and make better use of the assets they already have.
No comments:
Post a Comment