Quote for the day:
“Become the kind of leader that people
would follow voluntarily, even if you had no title or position.” --
Brian Tracy

Fast iteration and continuous delivery have become standard in industries like
e-commerce and finance. Healthcare operates under different rules. Here, the
consequences of technical missteps can directly affect care outcomes or
compromise sensitive patient information. Even a small configuration error can
delay a diagnosis or impact patient safety. That reality shifts how DevOps is
applied. The focus is on building systems that behave consistently, meet
compliance standards automatically, and support reliable care delivery at every
step. ... In many healthcare environments, developers are held back by slow
setup processes and multi-step approvals that make it harder to contribute code
efficiently or with confidence. This often leads to slower cycles and fragmented
focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow
templates, secure self-service provisioning for environments, and real-time,
AI-supported code review tools. In one case, development teams streamlined
dozens of custom scripts into a reusable pipeline that provisioned compliant
environments automatically. The result was a noticeable reduction in setup time
and greater consistency across projects. Building on this foundation, DevOps
also play a vital role in development and deployment of the Machine Learning
Models.
The big idea in DevSecOps has always been this: shift security left, embed it
early and often, and make it everyone’s responsibility. This makes DevSecOps
the perfect context for addressing the software understanding gap. Why?
Because the best time to capture visibility into your software’s inner
workings isn’t after it’s shipped—it’s while it’s being built. ... Software
Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They
provide a machine-readable inventory of every component in a piece of
software, down to the library level. SBOMs are a baseline requirement for
software visibility, but they’re not the whole story. What we need is
end-to-end traceability—from code to artifact to runtime. That
includes:Component provenance: Where did this library come from, and who
maintains it? Build pipelines: What tools and environments were used to
compile the software? Deployment metadata: When and where was this version
deployed, and under what conditions? ... Too often, the conversation around
software security gets stuck on source code access. But as anyone in DevSecOps
knows, access to source code alone doesn’t solve the visibility problem. You
need insight into artifacts, pipelines, environment variables, configurations,
and more. We’re talking about a whole-of-lifecycle approach—not a repo
review.

The legal framework governing generative AI is still evolving. As the
technology continues to advance, the legal requirements will also change.
Although the law is still playing catch-up with the technology, several
jurisdictions have already implemented regulations specifically targeting AI,
and others are considering similar laws. Businesses should stay informed about
emerging regulations and adapt their practices accordingly. ... Several
jurisdictions have already enacted laws that specifically govern the
development and use of AI, and others are considering such legislation. These
laws impose additional obligations on developers and users of generative AI,
including with respect to permitted uses, transparency, impact assessments and
prohibiting discrimination. ... In addition to AI-specific laws, traditional
data privacy and security laws – including the EU General Data Protection
Regulation (GDPR) and U.S. federal and state privacy laws – still govern the
use of personal data in connection with generative AI. For example, under GDPR
the use of personal data requires a lawful basis, such as consent or
legitimate interest. In addition, many other data protection laws require
companies to disclose how they use and disclose personal data, secure the
data, conduct data protection impact assessments and facilitate individual
rights, including the right to have certain data erased.

By drawing from public data sources available online, such as corporate
registries and property ownership records, OSINT tools can provide investigators
with a map of intricate corporate and criminal networks, helping them unmask
UBOs. This means investigators can work more efficiently to uncover connections
between people and companies that they otherwise might not have spotted. ...
External intelligence can help analysts to monitor developments, so that newer
forms of money laundering create fewer compliance headaches for firms. Some of
the latest trends include money muling, where criminals harness channels like
social media to recruit individuals to launder money through their bank
accounts, and trade-based laundering, which allows bad actors to move funds
across borders by exploiting international complexity. OSINT helps identify
these emerging patterns, enabling earlier intervention and minimizing
enforcement risks. ... When it comes to completing suspicious activity reports
(SARs), many financial institutions rely on internal data, spending millions on
transaction monitoring, for instance. While these investments are unquestionably
necessary, external intelligence like OSINT is often neglected – despite it
often being key to identifying bad actors and gaining a full picture of
financial crime risk.

Traditional infrastructure jobs no longer have the allure they once did, with
Silicon Valley and startups capturing the imagination of young talent. Let’s be
honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about
coding the next app, they forget someone has to build and maintain the physical
networks that power everything. And that ‘someone’ is disappearing fast. Another
factor is that the data centre sector hasn’t done a great job of telling its
story. We’re seen as opaque, technical and behind closed doors. Most students
don’t even know what a data centre is, and until something breaks it
doesn’t even register. That’s got to change. We need to reframe the narrative.
Working in data centres isn’t about grey boxes and cabling. It’s about solving
real-world problems that affect billions of people around the world, every
single second of every day. ... Fixing the skills gap isn’t just about hiring
more people. It’s about keeping the knowledge we already have in the industry
and finding ways to pass it on. Right now, we’re on the verge of losing decades
of expertise. Many of the engineers, designers and project leads who built
today’s data centre infrastructure are approaching retirement. While projects
operate at a huge scale and could appear exciting to new engineers, we also have
inherent challenges that come with relatively new sectors.

The main idea is achieving fully decentralized data, even biometric information,
giving individuals even more privacy. “We take their identity structure and we
actually run the matching of the identity inside MPC,” he says. This means that
neither Partisia nor the company that runs the structure has the full biometric
information. They can match it without ever decrypting it, Bundgaard explains.
Partisia says it’s getting close to this goal in its Japan experiment. The
company has also been working on a similar goal of linking digital credentials
to biometrics with U.S.-based Trust Stamp. But it is also developing other
identity-related uses, such as proving age or other information. ... Multiparty
computation protocols are closing that gap: Since all data is encrypted, no one
learns anything they did not already know. Beyond protecting data, another
advantage is that it still allows data analysts to run computations on encrypted
data, according to Partisia. There may be another important role for this
cryptographic technique when it comes to privacy. Blockchain and multiparty
computation could potentially help lessen friction between European privacy
standards, such as eIDAS and GDPR, and those of other countries. “I have one
standard in Japan and I travel to Europe and there is a different standard,”
says Bundgaard.

While headlines trumpet that “95% of generative AI pilots at companies are
failing,” the report actually reveals something far more remarkable: the fastest
and most successful enterprise technology adoption in corporate history is
happening right under executives’ noses. ... The MIT researchers discovered what
they call a “shadow AI economy” where workers use personal ChatGPT accounts,
Claude subscriptions and other consumer tools to handle significant portions of
their jobs. These employees aren’t just experimenting — they’re using AI
“multiples times a day every day of their weekly workload,” the study found. ...
Far from showing AI failure, the shadow economy reveals massive productivity
gains that don’t appear in corporate metrics. Workers have solved integration
challenges that stymie official initiatives, proving AI works when implemented
correctly. “This shadow economy demonstrates that individuals can successfully
cross the GenAI Divide when given access to flexible, responsive tools,” the
report explains. Some companies have started paying attention: “Forward-thinking
organizations are beginning to bridge this gap by learning from shadow usage and
analyzing which personal tools deliver value before procuring enterprise
alternatives.” The productivity gains are real and measurable, just hidden from
traditional corporate accounting.

Indirect prompt injection represents another significant vulnerability in LLMs.
This phenomenon occurs when an LLM follows instructions embedded within the data
rather than the user’s input. The implications of this vulnerability are
far-reaching, potentially compromising data security, privacy, and the integrity
of LLM-powered systems. At its core, indirect prompt injection exploits the
LLM’s inability to consistently differentiate between content it should process
passively (that is, data) and instructions it should follow. While LLMs have
some inherent understanding of content boundaries based on their training, they
are far from perfect. ... Jailbreaks represent another significant vulnerability
in LLMs. This technique involves crafting user-controlled prompts that
manipulate an LLM into violating its established guidelines, ethical
constraints, or trained alignments. The implications of successful jailbreaks
can potentially undermine the safety, reliability, and ethical use of AI
systems. Intuitively, jailbreaks aim to narrow the gap between what the model is
constrained to generate, because of factors such as alignment, and the full
breadth of what it is technically able to produce. At their core, jailbreaks
exploit the flexibility and contextual understanding capabilities of LLMs. While
these models are typically designed with safeguards and ethical guidelines,
their ability to adapt to various contexts and instructions can be turned
against them.

The most innovative organizations aren’t always purely top-down or
bottom-up—they carefully orchestrate combinations of both. Strategic leadership
provides direction and resources, while grassroots innovation offers practical
insights and the capability to adapt rapidly. Chynoweth noted how strategic
portfolio management helps companies “keep their investments in tech aligned to
make sure they’re making the right investments.” The key is creating systems
that can channel bottom-up innovations while ensuring they support the
organization’s strategic objectives. Organizations that succeed in managing both
top-down and bottom-up innovation typically have several characteristics. They
establish clear strategic priorities from leadership while creating space for
experimentation and adaptation. They implement systems for capturing and
evaluating innovations regardless of their origin. And they create mechanisms
for scaling successful pilots while maintaining strategic alignment. The future
belongs to enterprises that can master this balance. Pure top-down enterprises
will likely continue to struggle with implementation realities and changing
market conditions. In contrast, pure bottom-up organizations would continue to
lack the scale and coordination needed for significant impact.
“Digital-first doesn’t mean disconnected – it means being intentional,” she
said. For leaders it creates a culture where the people involved feel supported,
wherever they’re working, she thinks. She adds that while many organisations
found themselves in a situation where the pandemic forced them to establish a
remote-first system, very few actually fully invested in making it work well.
“High performance and innovation don’t happen in isolation,” said Feeney. “They
happen when people feel connected, supported and inspired.” Sentiments which she
explained are no longer nice to have, but are becoming a part of modern
organisational infrastructure. One in which people are empowered to do their
best work on their own terms. ... “One of the biggest challenges I have faced as
a founder was learning to slow down, especially when eager to introduce
innovation. Early on, I was keen to implement automation and technology, but I
quickly realised that without reliable data and processes, these tools could not
reach their full potential.” What she learned was, to do things correctly, you
have to stop, review your foundations and processes and when you encounter an
obstacle, deal with it, because though the stopping and starting might initially
be frustrating, you can’t overestimate the importance of clean data, the right
systems and personnel alignment with new tech.
No comments:
Post a Comment