Quote for the day:
"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi
AI demands more software developers, not less

Entry-level software development will change in the face of AI, but it won’t go
away. As LLMs increasingly handle routine coding tasks, the traditional
responsibilities of entry-level developers—such as writing boilerplate code—are
diminishing. Instead, these developers will evolve into AI supervisors; they’ll
test outputs, manage data labeling, and integrate code into broader systems. This
necessitates a deeper understanding of software architecture, business logic,
and user needs. Doing this effectively requires a certain level of experience
and, barring that, mentorship. The dynamic between junior and senior engineers
is shifting. Seniors need to mentor junior developers in AI tool usage and code
evaluation. Collaborative practices such as AI-assisted pair programming will
also offer learning opportunities. Teams are increasingly co-creating with AI;
this requires clear communication and shared responsibilities across experience
levels. Such mentorship is essential to prevent more junior engineers from
depending too heavily on AI, which results in shallow learning and a downward
spiral of productivity loss. Across all skill levels, companies are scrambling
to upskill developers in AI and machine learning. A late-2023 survey in the
United States and United Kingdom found that 56% of organizations listed prowess in
AI/ML as their top hiring priority for the coming year.
Ask a CIO Recruiter: How AI is Shaping the Modern CIO Role

Everything right now revolves around AI, but you still, as CIO, have to have a
grounding in all of the traditional disciplines of IT. Whether that is systems,
whether that’s infrastructure, whether that’s cybersecurity, you have to have
that well-rounded background. Even as these AI technologies become more
prevalent, you must consider the past infrastructure spend and cloud spend that
went into these technologies. How do you manage that? If you don’t have a
grounding in managing those costs and balancing them against the innovation you
are trying to create, that’s a recipe for failure on the cyber side. ... When
we’re looking for skill sets, we’re looking for people who
have actually taken those AI technologies and applied them within their
organizations to create real business value -- whether that is cost savings or
top-line revenue creation, whatever those may be. It’s hard to find those
candidates because there are a lot of people who can talk the talk around AI,
but when you really drill down, there is not much in the way of results to
show. It’s new, especially in applying the technology to certain settings. Take
manufacturing: there are not that many CIOs out there who have great examples of
applying AI to create value within organizations. It’s certainly accelerating,
and you’re going to see it accelerating more as we go into the future. It’s just
so new that those examples are few and far between.
Architectural Experimentation in Practice: Frequently Asked Questions
When the cost of reversing a decision is low or trivial, experimentation does
not reduce cost very much and may actually increase cost. Prior experience
with certain kinds of decisions usually guides the choice; if team members
have worked on similar systems or technical challenges, they will have an
understanding of how easily a decision can be reversed. ... Experiments are
more than just playing around with technology. There is a place for playing
with new ideas and technologies in an unstructured, exploratory way, and
people often say that they are "experimenting" when they are doing this. When
we talk about experimentation, we mean a process that involves forming a
hypothesis and then building something that tests this hypothesis, either
accepting or rejecting it. We prefer to call the other approach "unstructured
exploratory learning", a category that includes hackathons, "10% Time", and
other professional development opportunities. ... Experiments should have a
clear duration and purpose. When you find an experiment that’s not yielding
results in the desired timeframe, it’s time to stop it and design something
else to test your hypothesis that will yield more conclusive results. The
"failed" experiment can still yield useful information, as it may indicate
that the hypothesis is difficult to prove or may influence subsequent, more
clearly defined experiments.
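To make the hypothesis-first framing concrete, here is a minimal Python sketch of an experiment with an explicit hypothesis, a measurable threshold, and a time box. The names and numbers are illustrative, not drawn from the article.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: a minimal record of a structured experiment, as
# opposed to unstructured exploratory learning.
@dataclass
class Experiment:
    hypothesis: str   # what we expect to observe
    metric: str       # what we measure to decide
    threshold: float  # the hypothesis predicts the metric falls below this
    deadline: date    # clear duration: stop when this date passes

    def evaluate(self, observed: float, today: date) -> str:
        if today > self.deadline:
            # A timed-out experiment still yields information: the
            # hypothesis may be hard to prove with this design.
            return "inconclusive: redesign the experiment"
        return "accepted" if observed <= self.threshold else "rejected"

exp = Experiment(
    hypothesis="Switching to async I/O cuts p99 latency below 200 ms",
    metric="p99_latency_ms",
    threshold=200.0,
    deadline=date(2025, 6, 30),
)
print(exp.evaluate(observed=180.0, today=date(2025, 6, 1)))  # accepted
```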
Optimizing IT with Open Source: A Guide to Asset Management Solutions

Orchestration frameworks are crucial for developing sophisticated AI
applications that can perform tasks beyond simply answering a single question.
While a single LLM is proficient in understanding and generating text, many
real-world AI applications require performing a series of steps involving
different components. Orchestration frameworks provide the structure necessary
to design and manage these complex workflows, ensuring that all the various
components of the AI system work together efficiently. ... One way
orchestration frameworks enhance the power of LLMs is through a technique
known as “prompt chaining.” Think of it as telling a story one step at a time.
Instead of giving the LLM a single, lengthy instruction, you provide it with a
series of smaller, interconnected instructions known as prompts. The
response from one prompt then becomes the starting point for the following
prompt, guiding the LLM through a more complex thought process. Open-source
orchestration frameworks make it much simpler to create and manage these
chains of prompts. They often provide tools that allow developers to easily
link prompts together, sometimes through visual interfaces or programming
tools. Prompt chaining can be helpful in many situations.
Reframing DevSecOps: Software Security to Software Safety

A templatized, repeatable, process-led approach, driven by collaboration between
platform and security teams, leads to a fundamental shift in the way teams think
about their objectives. They move from the concept of security, which promises a
state free from danger or threat, to safety, which focuses on creating systems
that are protected from and unlikely to create danger. This shift emphasizes
proactive risk mitigation through thoughtful, reusable design patterns and
implementation rather than reactive threat mitigation. ... The outcomes of
security products versus product security are vastly different, with the latter
producing far greater value. Instead of continuing to shift responsibilities,
development teams should embrace the platform security engineering paradigm. By
building security directly into shared processes and operations, development
teams can scale up to meet their needs today and in the future. Only after these
strong foundations have been established should teams layer in routinely run
security tools for assurance and problem identification. This approach, combined
with aligned incentives and genuine collaboration between teams, creates a more
sustainable path to secure software development that works at scale.
10 things you should include in your AI policy

A carefully thought-out AI use policy can help a company set criteria for risk and
safety, protect customers, employees, and the general public, and help the
company zero in on the most promising AI use cases. “Not embracing AI in a
responsible manner is actually reducing your advantage of being competitive in
the marketplace,” says Bhrugu Pange, a managing director who leads the technology
services group at AArete, a management consulting firm. ... An AI policy needs
to start with the organization’s core values around ethics, innovation, and
risk. “Don’t just write a policy to write a policy to meet a compliance
checkmark,” says Avani Desai, CEO at Schellman, a cybersecurity firm that works
with companies on assessing their AI policies and infrastructure. “Build a
governance framework that’s resilient, ethical, trustworthy, and safe for
everyone — not just so you have something that nobody looks at.” Starting with
core values will help with the creation of the rest of the AI policy. “You want
to establish clear guidelines,” Desai says. “You want everyone from top down to
agree that AI has to be used responsibly and has to align with business ethics.”
... Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data
risk and privacy principal at PwC. “You don’t want to overly restrict the
low-risk stuff,” he says.
FedRAMP's Automation Goal Brings Major Promises - and Risks

FedRAMP practitioners, federal cloud security specialists and cybersecurity
professionals who spoke to Information Security Media Group welcomed the push to
automate security assessments and streamline approvals. They warned that without
clear details on execution, the changes risk creating new uncertainties and
disrupting companies midway through the existing process. Program
officials said they will establish a series of community working groups to serve
as a platform for industry and the public to engage directly with FedRAMP
experts and collaborate on solutions that meet its standards and policies. "This
is both exciting and scary," said John Allison, senior director of federal
advisory services for the federal cybersecurity solutions provider, Optiv +
ClearShark. "As someone who works with clients on their FedRAMP strategy, this
is going to open new options for companies - but I can see a lot of uncertainty
weighing heavily on corporate leadership until more details are available."
Automation may help reduce costs and timelines, he said, but companies
mid-process could face disruption and agencies will shoulder more responsibility
until new tools are in place. Allison said GSA could further streamline FedRAMP
by allowing cloud providers to submit materials directly and pursue
authorization without an agency sponsor.
Is hyperscaler lock-in threatening your future growth?

Infrastructure flexibility has increasingly become a competitive differentiator.
Enterprises that maintain the ability to deploy workloads across multiple
environments—whether hyperscaler, private cloud, or specialized provider—gain
strategic advantages that extend beyond operational efficiency. This cloud
portability empowers organizations to select the optimal infrastructure for each
application and workload based on their specific requirements rather than
provider limitations. When a new service emerges that delivers substantial
business value, companies with diversified infrastructure can adopt it without
dismantling their existing technology stack. Central to maintaining this
flexibility is the strategic adoption of open source technologies.
Enterprise-grade open source solutions provide the consistency and portability
that proprietary alternatives cannot match. By standardizing on technologies
like Kubernetes for container orchestration, PostgreSQL for database services,
or Apache Kafka for event streaming, organizations create a foundation that
works consistently across any infrastructure environment. The most resilient
enterprises approach their technology stack like a portfolio manager approaches
investments—diversifying strategically to maximize returns while minimizing
exposure to any single point of failure.
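As one small illustration of the portability idea (not a prescription from the article), an application that resolves its PostgreSQL and Kafka endpoints from the environment can move across hyperscaler, private cloud, or specialized providers without code changes. The variable names below are hypothetical.

```python
import os

# Illustrative only: endpoints come from the environment, so the same
# code deploys unchanged on any provider. Variable names are hypothetical.
DEFAULTS = {
    "DATABASE_URL": "postgresql://localhost:5432/app",  # PostgreSQL anywhere
    "KAFKA_BOOTSTRAP": "localhost:9092",                # Kafka anywhere
}

def infra_config() -> dict[str, str]:
    """Resolve infrastructure endpoints without provider-specific code."""
    return {key: os.environ.get(key, default) for key, default in DEFAULTS.items()}

if __name__ == "__main__":
    cfg = infra_config()
    print(f"connecting to {cfg['DATABASE_URL']} and {cfg['KAFKA_BOOTSTRAP']}")
```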
7 risk management rules every CIO should follow

The most critical risk management rule for any CIO is maintaining a
comprehensive, continuously updated inventory of the organization’s entire
application portfolio, proactively identifying and mitigating security risks
before they can materialize, advises Howard Grimes, CEO of the Cybersecurity
Manufacturing Innovation Institute, a network of US research institutes focusing
on developing manufacturing technologies through public-private partnerships.
That may sound straightforward, but many CIOs fall short of this fundamental
discipline, Grimes observes. ... Cybersecurity is now a multi-front war, Selby
says. “We no longer have the luxury of anticipating the attacks coming at us
head-on.” Leaders must acknowledge the interdependence of a robust risk
management plan: Each tier of the plan plays a vital role. “It’s not merely a
cyber liability policy that does the heavy lifting or even top-notch employee
training that makes up your armor — it’s everything.” The No. 1 way to minimize
risk is to start from the top down, Selby advises. “There’s no need to decrease
cyber liability coverage or slack on a response plan,” he says. Cybersecurity
must be an all-hands-on-deck endeavor. “Every team member plays a vital role in
protecting the company’s digital assets.”
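An application inventory of the kind Grimes describes can start very small. The Python sketch below (illustrative only, not from the Cybersecurity Manufacturing Innovation Institute) flags portfolio entries whose last security review has gone stale.

```python
from datetime import date, timedelta

# Illustrative sketch of a minimal application inventory: each entry
# records an owner and the date of its last security review.
INVENTORY = {
    "billing-api": {"owner": "payments", "last_review": date(2025, 1, 10)},
    "legacy-crm":  {"owner": "sales",    "last_review": date(2023, 7, 2)},
}

def stale_entries(today: date, max_age_days: int = 180) -> list[str]:
    """Return applications whose review predates the allowed window."""
    cutoff = today - timedelta(days=max_age_days)
    return [app for app, meta in INVENTORY.items() if meta["last_review"] < cutoff]

print(stale_entries(date(2025, 4, 1)))  # ['legacy-crm']
```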
Shift-Right Testing: Smart Automation Through AI and Observability

Shift-right testing goes beyond the conventional approach of performing
pre-release testing, enabling development teams to validate software under
real-world conditions. This approach includes canary releases, where
new features are released to a subset of users before the full launch. It also
involves A/B testing, where two versions of the application are compared in real
time. Another important practice is chaos engineering, in which failures are
deliberately introduced to check the resilience of the system. ...
Chaos engineering is the practice of injecting controlled failures into the
system to assess its robustness with the help of tools like Chaos Monkey and
Gremlin. This helps validate the actual behavior of a system in a
production-like environment. Testing feedback loops are also automated to
ensure that shift-right is applied consistently, using AI-powered test
analytics tools like Testim and Applitools to inform test case selection.
This makes it possible to use production data to drive the automatic generation
of test suites, increasing coverage and precision. Real-time alerting and
self-healing mechanisms also enhance shift-right testing: observability tools
can be set up to send alerts whenever a test fails, and auto-remediation
scripts can repair test environments when they fail, without the need to
involve IT staff.
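To make the failure-injection idea concrete, here is a minimal, framework-free Python sketch. Real tools such as Chaos Monkey and Gremlin operate at the infrastructure level; this decorator simply fails a call path at a controlled rate so retry or fallback logic can be exercised.

```python
import random

# Framework-free sketch of controlled failure injection, illustrative only.
def chaotic(failure_rate: float):
    """Decorator that makes a call fail with the given probability."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected failure")  # simulated outage
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaotic(failure_rate=0.2)  # 20% of calls fail on purpose
def fetch_profile(user_id: int) -> dict:
    return {"id": user_id, "name": "demo"}

# Exercise the retry logic the system is supposed to have.
for attempt in range(3):
    try:
        print(fetch_profile(42))
        break
    except ConnectionError:
        print(f"attempt {attempt + 1} failed, retrying")
```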