Preparing for AI-Augmented Software Engineering
AI-augmented approaches will free software engineers to focus on tasks that
require critical thinking and creativity, predicts John Robert, deputy director
of the software solutions division of the Carnegie Mellon University Software
Engineering Institute. "A key potential benefit that excites most enthusiasts of
AI-augmented software engineering approaches is efficiency -- the ability to
develop more code in less time and lower the barrier to entry for some tasks."
Teaming humans and AI will shift the attention of humans to the conceptual tasks
that computers aren't good at while reducing human error from tasks where AI can
help, he observes in an email interview. ... Hall notes that GenAI can access
vast amounts of data to analyze market trends, current user behavior, customer
feedback, and usage data to help identify key features that are in high demand
and have the potential to deliver significant value to users. "Once features are
described and prioritized, multiple agents can create the software program's
components." This approach breaks down big tasks into multiple activities with
an overall architecture. "It truly changes how we solve complex issues and apply
technology."
Code Busters: Are Ghost Engineers Haunting DevOps Productivity?
The assertion here is that almost 10% of software application developers do
effectively nothing all day, or indeed all week. For wider context, the
remote-worker segment has more high-performing outliers, but in-office
workers exhibit higher average performance overall. ... “Many ghost
engineers I’ve talked to share a common story, i.e. they become disengaged due
to frustration or loss of motivation in their roles. Over time, they may test
the limits of how much effort they can reduce without consequence. This
gradual disengagement often results in them turning into ghosts; originally
not out of malice, but as a by-product of their work environment.” He says
that managers want to build high-performing teams but face conflicting
incentives. A poorly performing team reflects badly on its leadership, leading
some to downplay problems rather than address them head-on. Additionally,
organizational politics may discourage reducing team sizes, even when smaller,
more focused teams could be more effective. ... “There’s also the fact that
senior leaders are often further removed from day-to-day operations. Their
decisions are based on trust in middle management or flawed metrics, such as
lines of code or commit counts. They, too, are sometimes not incentivized to
reduce team sizes or deeply investigate performance issues, as their focus
tends to be on higher-level strategic outcomes,” said Denisov-Blanch.
Why Data Centers Must Strengthen Network Resiliency In The Age of AI
If a network outage occurs, there will be widespread disruptions, negatively
affecting businesses globally. In particular, network outages will compromise
the accessibility of AI applications, the very thing data centers scaled to
support. Outages—and even reduced performance—carry significant risks, both
financial and reputational. Data centers must therefore adopt network
solutions, like Failover to Cellular and out-of-band (OOB) management, to
ensure AI services remain accessible amid disruptions to normal operations.
... OOB management capabilities and Failover to Cellular integration lay a
solid foundation for network resilience. However, data centers don’t need to
stop there. AI integrations promise further enhancements, elevating these
tools to the next level through advanced intelligence and automation. While it
may seem odd to use AI when the extra stress on data centers today comes from
increased AI usage, the advanced capabilities and accompanying benefits of
this technology speak for themselves. AI’s ability to analyze patterns allows
it to detect connectivity issues that could cause failures. When combined with
Failover to Cellular, for example, AI can orchestrate a seamless switch to the
cellular backup, especially during peak traffic. AI can also automatically
take proactive measures like predictive maintenance or rerouting traffic,
reducing downtime and improving resilience.
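As a rough illustration of that failover pattern, the sketch below watches link latency for anomalous spikes and pre-emptively routes traffic to a cellular backup. The probe samples, the threshold, and the path names are assumptions for illustration, not vendor APIs.

```python
import statistics

PRIMARY, CELLULAR = "primary-wan", "cellular-backup"

def is_anomalous(latencies_ms: list[float], threshold_sigma: float = 3.0) -> bool:
    """Flag the newest sample if it sits far outside the recent baseline."""
    baseline, newest = latencies_ms[:-1], latencies_ms[-1]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0
    return abs(newest - mu) > threshold_sigma * sigma

def choose_path(latencies_ms: list[float]) -> str:
    # Fail over pre-emptively when the pattern looks like an impending outage.
    return CELLULAR if is_anomalous(latencies_ms) else PRIMARY

samples = [21.0, 23.5, 20.8, 22.1, 24.0, 310.0]  # sudden latency spike
print(choose_path(samples))                       # -> cellular-backup
```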
Financial services need digital identity stitched together, investors take note
Financial institutions are all looking for a low-friction, high-accuracy way
of authenticating customers, prospects and business partners that also keeps
regulators happy. Some of the approaches and techniques used by established
players in the digital identity market have achieved good volume and scale,
and newer innovative methods are still proving themselves. Byunn highlights
the opportunity in a third layer that’s “all about how you stitch these things
together, because so far no one has produced a single solution that addresses
everything.” This layer, he says, includes both “orchestration” and elements
of holistic scoring (heuristics etc.) “that are not fully covered by what the
market calls orchestration.” Earlier waves of technology serving financial
services companies were thoroughly penetrated by fraudsters, and in some cases
offered poor user experience, Byunn says. One example of this, knowledge-based
authentication, remains “shockingly still prevalent in the industry.” ... The
threat of deepfakes to financial services institutions is commonly overstated
at present, according to Byunn, at least in part because conventional wisdom
underestimates the effectiveness of market leaders’ defenses against genAI
and deepfakes. However, he notes that
the threat has the potential to grow significantly.
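A minimal sketch of what such a “stitching” layer might look like, assuming a handful of verification signals, heuristic weights, and decision thresholds (all illustrative, not Byunn’s actual design):

```python
# Weights for each verification signal; values are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "document_check": 0.4,     # ID document authenticity
    "liveness": 0.3,           # selfie liveness / deepfake screening
    "device_reputation": 0.2,  # device and network risk
    "behavioral": 0.1,         # typing/interaction heuristics
}

def holistic_score(signals: dict[str, float]) -> float:
    """Weighted blend of per-signal scores, each in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * score for name, score in signals.items())

def decide(signals: dict[str, float]) -> str:
    # Orchestration step: combine signals, then route to a decision band.
    score = holistic_score(signals)
    if score >= 0.85:
        return "accept"
    return "manual_review" if score >= 0.6 else "reject"

applicant = {"document_check": 0.95, "liveness": 0.9,
             "device_reputation": 0.8, "behavioral": 0.7}
print(decide(applicant))  # -> accept (score 0.88)
```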
The world is running short of copper - telecoms networks could be the answer
Copper remains foundational in older telecom networks, particularly in Europe
and North America among incumbent operators like AT&T, Orange, and BT.
However, these networks are actively transitioning from copper to fiber
optics, particularly for ‘last mile’ connectivity and the replacement of
infrastructure like the Public Switched Telephone Network (PSTN). While recycling
from these sources may not completely plug the 20 percent gap in supply, it
can go a long way. It almost goes without saying that metal reclaimed this way
has far less environmental impact - around 15 times less than newly mined copper.
Purchasing copper from these sources is still often cheaper than mining it.
... Over the next eight to ten years, an estimated 800,000 tons of copper
could be extracted from telecom networks as part of the global shift to fiber
optics. ... Unlocking the value of reclaimed copper is both an environmental
and strategic win, especially with the soaring demand for this vital resource.
Through effective partnerships and advanced material recovery processes,
telecom companies can transform what was once surplus to requirements into a
valuable asset. Extracted copper can re-enter the supply chain, supporting the
broader green transition and reducing reliance on new mining
operations.
8 biggest cybersecurity threats manufacturers face
The manufacturing sector’s rapid digital transformation, complex supply
chains, and reliance on third-party vendors make for a challenging cyber
threat environment for CISOs. Manufacturers — often prime targets for
state-sponsored malicious actors and ransomware gangs — face the difficult
task of maintaining cost-effective operations while modernizing their network
infrastructure. “Many manufacturing systems rely on outdated technology that
lacks modern security measures, creating exploitable vulnerabilities,” says
Paul Cragg, CTO at managed security services firm NormCyber. “This is
exacerbated by the integration of industrial internet of things [IIoT]
devices, which expand the attack surface.” ... “While industries like
chemicals and semiconductors exhibit relatively higher cybersecurity maturity,
others, such as food and beverage or textiles, lag significantly,” Belal says.
“Even within advanced sectors, inconsistencies persist across organizations.”
Operational technology systems — which may include complex robotics and
automation components — are typically replaced far more slowly than components
of IT networks are, contributing to the growing security debt that many
manufacturers carry.
What is a data scientist? A key data analytics role and a lucrative career
Data scientists often work with data analysts, but their roles differ
considerably. Data scientists are often engaged in long-term research and
prediction, while data analysts seek to support business leaders in making
tactical decisions through reporting and ad hoc queries aimed at describing
the current state of reality for their organizations based on present and
historical data. So the difference between the work of data analysts and that
of data scientists often comes down to timescale. A data analyst might help an
organization better understand how its customers use its product in the
present moment, whereas a data scientist might use insights generated from
that data analysis to help design a new product that anticipates future
customer needs. ... Because data scientists need to manipulate data, implement
algorithms, and automate tasks, proficiency in programming is essential.
Van Loon notes that critical languages include Python, R, and SQL.
... They need a strong foundation in both mathematics and statistics to
analyze data accurately and make informed decisions. They also need to understand statistical tests,
distributions, likelihoods, and concepts such as hypothesis testing,
regression analysis, and Bayesian inference.
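As a small worked example of that toolkit, the sketch below runs a two-sample hypothesis test and a simple linear regression on synthetic data, using Python with NumPy and SciPy (the numbers are illustrative, not from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100.0, scale=15.0, size=200)  # e.g. control metric
group_b = rng.normal(loc=104.0, scale=15.0, size=200)  # e.g. variant metric

# Hypothesis test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Regression: fit y = a*x + b and report the slope and fit quality.
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 2.0 + rng.normal(scale=2.0, size=200)
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue**2:.3f}")
```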
How Active Archives Address AI’s Growing Energy and Storage Demands
Archives were once considered repositories of data that would only be accessed
occasionally, if at all. The advent of modern AI has changed the equation.
Almost all enterprise data could be valuable if made available to an AI
engine. Therefore, many enterprises are turning to archiving to gather
organizational data in one place and make it available for AI and GenAI tools
to access. Massive data archives can be stored in an active archive
cost-efficiently and at very low energy consumption levels, all while
keeping that data readily available on the network. Decades of archived data
can then be analyzed by an LLM or other machine learning or deep
learning pipeline. ... An intelligent data management software layer is the
foundation of an active archive. This software layer plays a vital role in
automatically moving data according to user-defined policies to where it
belongs for cost, performance, and workload priorities. High-value data that
is often accessed can be retained in memory. Other data can reside on SSDs,
lower tiers of disks, and within a tape- or cloud-based active archive. This
allows AI applications to mine all that data without being subjected to delays
due to content being stored offsite or having to be transferred to where AI
can process it.
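A minimal sketch of such a policy layer, with tier names and thresholds that are illustrative assumptions rather than any vendor’s defaults:

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    days_since_access: int
    accesses_per_month: int

def place(obj: DataObject) -> str:
    # User-defined policy: route each object by access frequency and age.
    if obj.accesses_per_month >= 100:
        return "memory"          # hot, frequently accessed
    if obj.days_since_access <= 30:
        return "ssd"             # warm
    if obj.days_since_access <= 365:
        return "hdd"             # cool, lower disk tier
    return "active_archive"      # cold: tape or cloud, but still online

for obj in [DataObject("training-set.parquet", 2, 500),
            DataObject("q3-logs.json", 20, 4),
            DataObject("2019-sensor-dump.tar", 900, 0)]:
    print(f"{obj.name} -> {place(obj)}")
```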
The Growing Importance of AI Governance
The goal of AI governance is to ensure that the benefits of machine learning
algorithms and other forms of artificial intelligence are available to
everyone in a fair and equitable manner. AI governance is intended to promote
the ethical application of the technology so that its use is transparent,
safe, private, accountable, and free of bias. To be effective, AI governance
must bring together government agencies, researchers, system designers,
industry organizations, and public interest groups. ... The long-term success
of AI depends on gaining public trust as much as it does on the technical
capabilities of AI systems. In response to the potential threats posed by
artificial intelligence, the U.S. Office of Science and Technology Policy
(OSTP) has issued a Blueprint for an AI Bill of Rights that’s intended to
serve as “a guide for a society that protects all people” from misuse of the
technology. ... As AI systems become more powerful and complex, businesses and
regulatory agencies face two formidable obstacles: the complexity of the
systems demands rule-making by technologists rather than politicians,
bureaucrats, and judges; and the thorniest issues in AI governance involve
value-based decisions rather than purely technical ones.
The Role of AI in Cybersecurity: 5 Trends to Watch in 2025
The integration of AI into Software-as-a-Service (SaaS) platforms is changing
how businesses manage security. For example, AI-enhanced tools are helping
organizations automate threat detection, analyze vast data sets more
efficiently, and respond to breaches or incidents more quickly. However, this
innovation also introduces new risks such as hallucinations and an
over-reliance on potentially poor data quality, meaning AI-powered systems
need to be carefully configured to avoid misleading outputs that put
defenders at a disadvantage. ... AI auditing tools will help
organizations assess whether AI models are making decisions based on biased or
discriminatory data – a concern that could lead to legal and reputational
challenges. As AI technology becomes more embedded in organizational
operations, ethical considerations must be at the forefront of AI governance
to help businesses avoid unintended consequences. Board members must be
proactive in understanding the implications of AI on data security and
ensuring that their companies are following best practices in AI governance
for compliance with evolving legislation. Without C-suite support and
understanding, and without collaboration between executives and security teams,
organizations will be more vulnerable to the potential risks AI poses to data
and intellectual property.
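As one concrete example of what an AI auditing tool might check, the sketch below computes the “four-fifths” disparate impact ratio across two groups of model decisions; the groups, data, and threshold are synthetic illustrations:

```python
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

approvals_group_a = [1, 1, 0, 1, 1, 1, 0, 1]  # 75% approved
approvals_group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved

ratio = disparate_impact(approvals_group_a, approvals_group_b)
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # common "four-fifths rule" threshold
    print("potential bias: flag model for review")
```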
Quote for the day:
"Leadership is about making others
better as a result of your presence and making sure that impact lasts in
your absence." -- Sheryl Sandberg