Quote for the day:
“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw
The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud
solutions like Terraform represented the increasing sophistication of IaC
capabilities. This shift enabled organizations to manage complex environments
with never-before-seen efficiency. The emergence of programming language-based
IaC tools like Pulumi then further blurred the lines between application
development and infrastructure management, empowering developers to take a
more active role in ops. ... For DevOps and platform engineering leaders, this
evolution means preparing for a future where cloud infrastructure management
becomes increasingly automated, intelligent and integrated with other aspects
of the software development life cycle. It also highlights the importance of
fostering a culture of continuous learning and adaptation, as the IaC
landscape continues to evolve at a rapid pace. ... Firefly’s “State of
Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly
evolving state of IaC adoption, maturity and impact. Over the course of the
past few editions, this report has become an increasingly crucial resource for
DevOps professionals, platform engineers and site reliability engineers (SREs)
navigating the complexities of multicloud environments and a changing IaC
tooling landscape.
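To make the declarative-versus-programmatic distinction above concrete, here is a toy sketch in the spirit of programming language-based IaC tools like Pulumi. This is not Pulumi's actual API; the Bucket class and its fields are invented for illustration. The point is that in a general-purpose language, a loop can stamp out resources that would otherwise be repeated declarative blocks:

```python
import json

# Toy "programming-language IaC" sketch: resources are ordinary Python
# objects, so loops and conditionals replace copy-pasted declarations.

class Bucket:
    def __init__(self, name, versioned=False):
        self.name = name
        self.versioned = versioned

    def to_declaration(self):
        # Render to a declarative form, roughly how a plan/state file
        # would record the desired resource.
        return {"type": "storage_bucket", "name": self.name,
                "versioned": self.versioned}

# One loop creates per-environment resources; only prod gets versioning.
stack = [Bucket(f"logs-{env}", versioned=(env == "prod"))
         for env in ("dev", "staging", "prod")]

plan = [b.to_declaration() for b in stack]
print(json.dumps(plan, indent=2))
```

The rendered plan is still declarative, which is how such tools stay compatible with plan/apply workflows while letting developers use familiar language constructs.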
Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?
The use of Consent Managers provides advantages for both Data Fiduciaries and
Data Principals. For Data Fiduciaries, Consent Managers simplify compliance
with consent-related legal requirements, making it easier to manage and
document user consent in line with regulatory obligations. For Data
Principals, Consent Managers offer a streamlined and efficient way to grant,
modify, and revoke consent, empowering them with greater control over how
their personal data is shared. This enhanced efficiency in managing consent
also leads to faster, more secure, and smoother data flows, reducing the
complexities and risks associated with data exchanges. Additionally, Consent
Managers play a crucial role in helping Data Principals exercise their right
to grievance redressal. ... Currently, Data Fiduciaries can manage user
consent independently, making the role of Consent Managers optional. If this
remains voluntary, many companies may avoid them, reducing their
effectiveness. For Consent Managers to succeed, they need regulatory support,
flexible compliance measures, and a business model that balances privacy
protection with industry participation. ... Rooted in the fundamental right to
privacy under Article 21 of the Constitution of India, the DPDPA aims to
establish a structured approach to data processing while preserving individual
control over personal information.
The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders
here told me that unlike in 2023 and 2024, buyers are now focused squarely on
ROI. They want systems that move beyond pilot projects and start delivering
real efficiencies. Mensch says enterprises have developed “high expectations”
for AI, and many now understand that the hard part of deploying it isn’t
always the model itself—it’s everything around it: governance, observability,
security. Mistral, he says, has gotten good at connecting these layers, along
with systems that orchestrate data flows between different models and
subsystems. Once enterprises grapple with the complexity of building full AI
systems—not just using AI models—they start to see those promised
efficiencies, Mensch says. But more importantly, C-suite leaders are beginning
to recognize the transformative potential. Done right, AI systems can
radically change how information moves through a company. “You’re making
information sharing easier,” he says. Mistral encourages its customers to
break down silos so data can flow across departments. One connected AI system
might interface with HR, R&D, CRM, and financial tools. “The AI can
quickly query other departments for information,” Mensch explains. “You no
longer need to query the team.”
Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and
something of a fishing expedition. "I think the hardest part in AI is having
confidence that this will work," said Frankle. "If you came to me and said,
'Here's a problem in the healthcare space, here are the documents I have, do
you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose
that AI could automate some of the most boring legal tasks that exist?"
offered Frankle, whose parents are lawyers. "If you wanted an AI to help you
do legal research, and help you ideate about how to solve a problem, or help
you find relevant materials -- phenomenal!" "We're still in very early days"
of generative AI, "and so, kind of, we're benefiting from the strengths, but
we're still learning how to mitigate the weaknesses." ... In the midst of
uncertainty, Frankle is impressed with how customers have quickly traversed
the learning curve. "Two or three years ago, there was a lot of explaining to
customers what generative AI was," he noted. "Now, when I talk to customers,
they're using vector databases." "These folks have a great intuition for where
these things are succeeding and where they aren't," he said of Databricks
customers. Given that no company has an unlimited budget, Frankle advised
starting with an initial prototype, so that investment only proceeds to the
extent that it's clear an AI app will provide value.
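The vector databases Frankle mentions customers adopting boil down to similarity search over embeddings. A minimal sketch of the core idea, with invented three-dimensional "embeddings" standing in for the hundreds of dimensions real models produce:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, corpus):
    # Return the corpus key whose embedding is most similar to the query.
    return max(corpus, key=lambda k: cosine_similarity(query, corpus[k]))

# Toy embeddings; a real system would get these from an embedding model.
corpus = {
    "contract law memo": [0.9, 0.1, 0.0],
    "patent filing guide": [0.8, 0.3, 0.1],
    "cafeteria menu": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]
print(nearest(query, corpus))  # → contract law memo
```

Production vector databases add approximate-nearest-neighbor indexes so this lookup stays fast over millions of documents, but the retrieval logic is the same.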
Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a
detailed breakdown of actions in specific categories, and a list of projected
long- and short-term outcomes. The goals are ambitious in scope: a desired
short-term outcome is to “mature existing awareness about privacy across
multiple domains of life” so that “individuals will develop a more nuanced
understanding of privacy issues recognising their significance across various
aspects of their lives, including personal, professional, and social domains.”
Laws, skills training and better security tools are one thing, but changing
how people understand their privacy is a major social undertaking. The OAIC’s
long-term outcomes seem more rooted in practicality; they include the
widespread implementation of enhanced privacy compliance practices for
organizations, better public understanding of the OAIC’s role as regulator,
and enhanced data handling industry standards. ... AI is a matter of growing concern, and compliance for model training and development will be a major
focus for the regulator. In late February, Kind delivered a speech on privacy
and security in retail that references her decision on the Bunnings case,
which led to the publication of guidance on the use of facial recognition
technology, focused on four key privacy concepts: necessity/proportionality,
consent/transparency, accuracy/bias, and governance.
Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is
basically the same thing, right?’ And I think that’s really one of my big
concerns,” she says. This blending of responsibilities is reflected in
training practices, according to Kazi, who notes how many organizations
combine security and privacy training, which isn’t inherently problematic, but
it carries risks. “One of the questions we ask in our survey is, ‘Do you
combine security training and privacy training?’ Some organizations say they
do not necessarily see it as a bad thing, but you can … be doing security, but
you’re not doing privacy. And so what’s highly concerning is that you
can’t have privacy without security, but you could potentially do security
well without considering privacy.” As Trovato emphasizes, “cybersecurity
people tend to be from Mars and privacy people from Venus”, yet he also
observes how privacy and cybersecurity professionals are often grouped
together, adding to the confusion about what skills are truly needed. ...
“Privacy includes how are we using data, how are you collecting it, who are
you sharing it with, how are you storing it — all of these are more subtle
component pieces, and are you meeting the requirements of the customer, of the
regulator, so it’s a much more outward, business-focused activity day-to-day
versus we’ve got to secure everything and make sure it’s all protected.”
Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before,
development teams need to have a high level of security maturity to avoid
rework. That necessitates having highly skilled personnel working within a
strategic, prevention-focused framework. Developer and AppSec teams must work
closely together, as opposed to the old model of operating as separate
entities. Today, developers need to assume a significant role in ensuring
security best practices. The most recent BSIMM report from Black Duck
Software, for instance, found that there are only 3.87 AppSec professionals
for every 100 developers, which doesn’t bode well for AppSec teams trying to
secure an organization’s software all on their own. A critical part of
learning initiatives is the ability to gauge the progress of developers in the
program, both to ensure that developers are qualified to work on the
organization’s most sensitive projects and to assess the effectiveness of the
program. This upskilling should be ongoing, and you should always look for
areas that can be improved. Making use of a tool like SCW’s Trust Score, which
uses benchmarks to gauge progress both internally and against industry
standards, can help ensure that progress is being made.
Why thinking like a tech company is essential for your business’s survival
The phrase “every company is a tech company” gets thrown around a lot, but
what does that actually mean? To us, it’s not just about using technology —
it’s about thinking like a tech company. The most successful tech companies
don’t just refine what they already do; they reinvent themselves in
anticipation of what’s next. They place bets. They ask: Where do we need to be
in five or 10 years? And then, they start moving in that direction while
staying flexible enough to adapt as the market evolves. ... Risk management is
part of our DNA, but AI presents new types of risks that businesses haven’t
dealt with before. ... No matter how good our technology is, our success
ultimately comes down to people. And we’ve learned that mindset matters more
than skill set. When we launched an AI proof-of-concept project for our
interns, we didn’t recruit based on technical acumen. Instead, we looked for
curious, self-starting individuals willing to experiment and learn. What
we found was eye-opening—these interns thrived despite having little prior
experience with AI. Why? Because they asked great questions, adapted quickly,
and weren’t afraid to explore. ... Aligning your culture, processes and
technology strategy ensures you can adapt to a rapidly changing landscape
while staying true to your core purpose.
Realizing the Internet of Everything

The obvious answer to this problem is governance: a set of rules that
constrain use, and technology to enforce them. The problem, as so often with
the “obvious,” is that setting the rules would be difficult, enforcing them
through technology would be harder, and getting people to trust that
enforcement would be harder still. Think about Asimov’s Three Laws of
Robotics and how many of his stories focused on people working to get around
them. Two decades ago, a research lab ran a video collaboration experiment
that put a small camera in offices so people could communicate
experiment that involved a small camera in offices so people could communicate
remotely. Half the workforce covered their camera when they got in. I know
people who routinely cover their webcams when they’re not on a scheduled video
chat or meeting, and you probably do too. So what if the camera light isn’t
on? The assumption is that somebody has probably hacked in anyway. Social
concerns inevitably collide with
attempts to integrate technology tightly with how we live. Have we reached a
point where dealing with those concerns convincingly is essential in letting
technology improve our work, our lives, further? We do have widespread, if not
universal, video surveillance. On a walk this week, I found doorbell cameras
or other cameras on about a quarter of the homes I passed, and I’d bet there
are even more in commercial areas.
Cloud Security Architecture: Your Guide to a Secure Infrastructure
Threat modeling can be a good starting point, but it shouldn't end with a
stack-based security approach. Rather than focusing solely on the
technologies, approach security by mapping parts of your infrastructure to
equivalent security concepts. Here are some practical suggestions and areas to
zoom in on for implementation. ... When protecting workloads in the cloud,
consider using some variant of runtime security. Kubernetes users have no
shortage of choice here with tools such as Falco, an open-source runtime
security tool that monitors your applications and detects anomalous behaviors.
However, chances are your cloud provider has some form of dynamic threat
detection for your workloads. For example, AWS offers Amazon GuardDuty, which
continuously monitors your workloads for malicious activity and unauthorized
behavior. ... Implementing two-factor authentication adds an extra layer
of protection by requiring a second form of verification, such as an
authenticator app or a passkey, in addition to your password. While reaching
for your authenticator app every time you log in might seem slightly
inconvenient, it's a far better outcome than dealing with the aftermath of a
breached account. The minor inconvenience is a small price to pay for the
added security it provides.
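The authenticator-app codes mentioned above are typically TOTP (RFC 6238). A minimal sketch of the server-side verification, using only the Python standard library; the secret and drift window below are illustrative:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    # RFC 6238: HMAC-SHA1 over the 30-second time counter, dynamically
    # truncated to a short decimal code -- the same math the app runs.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, window=1):
    # Accept codes from adjacent time steps to tolerate clock drift,
    # comparing in constant time.
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now + d * 30), submitted)
               for d in range(-window, window + 1))
```

A real deployment would also rate-limit attempts and reject a code once it has been used, but the core check is this small.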