Quote for the day:
"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones
Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving

There are various scenarios that could emerge from the near-term arrival of
powerful AI. It is challenging and frightening that we do not really know how
this will go. New York Times columnist Ezra Klein addressed this in a recent
podcast: “We are rushing toward AGI without really understanding what that is or
what that means.” For example, he claims there is little critical thinking or
contingency planning going on around the implications and, for example, what
this would truly mean for employment. Of course, there is another perspective on
this uncertain future and lack of planning, as exemplified by Gary Marcus, who
believes deep learning generally (and LLMs specifically) will not lead to AGI.
Marcus issued what amounts to a takedown of Klein’s position, citing notable
shortcomings in current AI technology and suggesting it is just as likely that
we are a long way from AGI. ... While each of these scenarios appears plausible,
it is discomforting that we really do not know which are the most likely,
especially since the timeline could be short. We can see early signs of each:
AI-driven automation increasing productivity, misinformation that spreads at
scale and erodes trust, and concerns over disingenuous models that resist their
guardrails. Each scenario would demand its own adaptations from individuals,
businesses, governments and society.
AI in Network Observability: The Dawn of Network Intelligence

ML algorithms, trained on vast datasets of enriched, context-savvy network
telemetry, can now detect anomalies in real time, predict potential outages,
foresee cost overruns, and even identify subtle performance degradations that
would otherwise go unnoticed. Imagine an AI that can predict a spike in
malicious traffic based on historical patterns and automatically trigger
mitigations to block the attack and prevent disruption. That’s a straightforward
example of the power of AI-driven observability, and it’s already possible
today. But AI’s role isn’t limited to number crunching. GenAI is revolutionizing
how we interact with network data. Natural language interfaces allow engineers
to ask questions like: “What’s causing latency on the East Coast?” and receive
concise, insightful answers. ... These aren’t your typical AI algorithms.
Agentic AI systems possess a degree of autonomy, allowing them to make decisions
and take actions within a defined framework. Think of them as digital network
engineers, initially assisting with basic tasks but constantly learning and
evolving, making them capable of handling routine assignments, troubleshooting
fundamental issues, or optimizing network configurations.
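To make the anomaly-detection idea concrete, here is a minimal Python sketch of the statistical core of such a system: a rolling z-score over per-interface traffic samples. The function name, window size, and threshold are illustrative assumptions, not anything from the article, and a production system would work on far richer, context-enriched telemetry.

# Minimal sketch: rolling z-score anomaly detection over network telemetry.
# Assumes telemetry arrives as (timestamp, bytes_per_second) samples for one
# interface; the window size and threshold are illustrative, not from the article.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=60, threshold=3.0):
    """Yield (timestamp, value, zscore) for points far outside the recent baseline."""
    history = deque(maxlen=window)
    for timestamp, value in samples:
        if len(history) >= 10:                      # need a baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (value - mu) / sigma
                if abs(z) > threshold:              # e.g. a sudden traffic spike
                    yield timestamp, value, round(z, 2)
        history.append(value)

# Example: normal traffic, then a burst of suspicious volume at t=95..97
stream = [(t, 1000 + (t % 7) * 5) for t in range(95)] + [(95, 9000), (96, 9500), (97, 8800)]
for ts, val, z in detect_anomalies(stream):
    print(f"t={ts}: {val} B/s looks anomalous (z={z}) -- trigger mitigation review")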
Edge Computing and the Burgeoning IoT Security Threat

A majority of IoT devices come with wide-open default security settings. The IoT
industry has been lax in setting and agreeing to device security standards.
Additionally, many IoT vendors are small shops that are more interested in
rushing their devices to market than in security standards. Another reason for
the minimal security settings on IoT devices is that IoT device makers expect
corporate IT teams to implement their own device settings. This occurs when IT
professionals -- normally part of the networking staff -- manually configure
each IoT device with security settings that conform with their enterprise
security guidelines. ... Most IoT devices are not enterprise-grade. They might
come with weak or outdated internal components that are vulnerable to security
breaches or contain sub-components with malicious code. Because IoT devices are
built to operate over various communication protocols, there is also an
ever-present risk that they aren't upgraded for the latest protocol security.
Given the large number of IoT devices from so many different sources, it's
difficult to execute a security upgrade across all platforms. ... Part of the
senior management education process should be gaining their support for a
centralized RFP process for any new IT, including edge computing and IoT.
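The manual configuration step described above is often scripted in practice. Below is a hypothetical Python sketch of a baseline audit that flags devices still running on default credentials, old firmware, or an insecure management protocol; the device fields, baseline values, and device names are made up for illustration.

# Illustrative sketch: auditing IoT device settings against an enterprise baseline.
# The device records and policy fields are hypothetical stand-ins for whatever
# inventory/management system an IT team actually uses.
from dataclasses import dataclass

@dataclass
class IoTDevice:
    name: str
    default_password_changed: bool
    firmware_version: str
    telnet_enabled: bool          # legacy plaintext management protocol

BASELINE = {"min_firmware": "2.4.0", "allow_telnet": False}

def version_at_least(current: str, minimum: str) -> bool:
    return tuple(map(int, current.split("."))) >= tuple(map(int, minimum.split(".")))

def audit(device: IoTDevice) -> list[str]:
    findings = []
    if not device.default_password_changed:
        findings.append("default credentials still in place")
    if not version_at_least(device.firmware_version, BASELINE["min_firmware"]):
        findings.append(f"firmware {device.firmware_version} below baseline")
    if device.telnet_enabled and not BASELINE["allow_telnet"]:
        findings.append("insecure management protocol enabled")
    return findings

for dev in [IoTDevice("lobby-camera-01", False, "2.1.3", True),
            IoTDevice("hvac-sensor-07", True, "2.5.0", False)]:
    issues = audit(dev)
    print(dev.name, "->", issues or "compliant")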
Data Quality Metrics Best Practices
While accuracy, consistency, and timeliness are key data quality metrics, the
acceptable thresholds for these metrics to achieve passable data quality can
vary from one organization to another, depending on their specific needs and use
cases. There are a few other quality metrics, including integrity, relevance,
validity, and usability. Depending on the data landscape and use cases, data
teams can select the most appropriate quality dimensions to measure. ... Data
quality metrics and data quality dimensions are closely related, but they aren’t
the same; the two also differ in purpose, usage, and scope. Data quality
dimensions are attributes or characteristics that define data quality. On the
other hand, data quality metrics are values, percentages, or quantitative
measurements of how well the data meets the above characteristics. A good
analogy to explain the differences between data quality metrics and dimensions
would be the following: data quality dimensions describe a product’s attributes
– it is durable, long-lasting, simply designed – while data quality metrics
quantify them: how much it weighs, how long it lasts, and the
like. ... Every solution starts with a problem. Identify the pressing concerns –
missing records, data inconsistencies, format errors, or old records. What is it
that you are trying to solve?
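As a rough illustration of the dimension-versus-metric distinction, here is a small Python sketch that turns three dimensions (completeness, validity, timeliness) into percentage metrics over a handful of records. The records, field rules, and the one-year timeliness window are invented for the example; real thresholds would come from the organization's own requirements.

# Minimal sketch: turning quality dimensions into quantitative metrics.
# Records, field rules, and thresholds are illustrative only.
from datetime import date

records = [
    {"id": 1, "email": "a@example.com", "updated": date(2025, 3, 1)},
    {"id": 2, "email": None,            "updated": date(2023, 1, 15)},
    {"id": 3, "email": "not-an-email",  "updated": date(2025, 2, 20)},
]

def pct(hits, total):
    return round(100.0 * hits / total, 1) if total else 0.0

completeness = pct(sum(r["email"] is not None for r in records), len(records))
validity     = pct(sum(r["email"] is not None and "@" in r["email"] for r in records), len(records))
timeliness   = pct(sum((date(2025, 4, 1) - r["updated"]).days <= 365 for r in records), len(records))

metrics = {"completeness_%": completeness, "validity_%": validity, "timeliness_%": timeliness}
print(metrics)   # each value quantifies one dimension; compare against agreed thresholds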
How to Modernize Legacy Systems with Microservices Architectures

Scalability and agility are two significant benefits of a microservices
architecture. With monolithic applications, it's difficult to isolate and scale
distinct application functions under variable loads. Even if a monolithic
application is scaled to meet increased demand, it could take months of time and
capital to reach that goal. By then, the demand might have changed or
disappeared altogether, and the application will waste resources, bogging down
the larger operating system. ... microservices architectures make applications
more resilient. Because monolithic applications function on a single codebase, a
single error during an update or maintenance can create large-scale problems.
Microservices-based applications, however, work around this issue. Because each
function runs on its own codebase, it's easier to isolate and fix problems
without disrupting the rest of the application's services. ... Microservices
might seem like a one-size-fits-all, no-downsides approach to modernizing legacy
systems, but the first step to any major system migration is to understand the
pros and cons. No major project comes without challenges, and migrating to
microservices is no different. For instance, personnel might be resistant to
changes associated with microservices.
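One way to picture the resilience argument is from the caller's side: each extracted function becomes an independent endpoint that can time out or degrade without taking the whole application down. The Python sketch below assumes two hypothetical internal service URLs; it illustrates the pattern rather than prescribing an implementation.

# Illustrative sketch of the isolation benefit: the calling application treats each
# extracted service as an independent endpoint with a timeout and a fallback, so a
# failure in one function does not take down the rest. URLs are hypothetical.
import json
from urllib import request, error

SERVICES = {
    "catalog":   "http://catalog.internal:8080/items",
    "inventory": "http://inventory.internal:8080/stock",
}

def call_service(name: str, timeout: float = 2.0):
    try:
        with request.urlopen(SERVICES[name], timeout=timeout) as resp:
            return json.load(resp)
    except (error.URLError, TimeoutError, json.JSONDecodeError):
        # Degrade gracefully instead of failing the whole request path,
        # which a single error inside a monolith could not do as easily.
        return {"service": name, "status": "degraded", "data": []}

page = {
    "items": call_service("catalog"),
    "stock": call_service("inventory"),   # if this one is down, items still render
}
print(page)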
Elevating Employee Experience: Transforming Recognition with AI
AI’s ability to analyse patterns in behaviour, performance, and preferences
enables organisations to offer personalised recognition that resonates with
employees. AI-driven platforms provide real-time insights to leaders, ensuring
that appreciation is timely, equitable, and free from unconscious biases. ...
Burnout remains a critical challenge in today’s workplace, especially as
workloads intensify and hybrid models blur work-life boundaries. With 84% of
recognised employees being less likely to experience burnout, AI-driven
recognition programs offer a proactive approach to employee well-being. Candy
pointed out that AI can monitor engagement levels, detect early signs of
burnout, and prompt managers to step in with meaningful appreciation. By
tracking sentiment analysis, workload patterns, and feedback trends, AI helps HR
teams intervene before burnout escalates. “Recognition isn’t just about
celebrating big milestones; it’s about appreciating daily efforts that often go
unnoticed. AI helps ensure no contribution is left behind, reinforcing a culture
of continuous encouragement and support,” remarked Candy Fernandez. Arti Dua
expanded on this, explaining that AI can help create customised recognition
strategies that align with employees’ stress levels and work patterns, ensuring
appreciation is both timely and impactful.
11 surefire ways to fail with AI

“The fastest way to doom an AI initiative? Treat it as a tech project instead of
a business transformation,” Pallath says. “AI doesn’t function in isolation — it
thrives on human insight, trust, and collaboration.” The assumption that just
providing tools will automatically draw users is a costly myth, Pallath says.
“It has led to countless failed implementations where AI solutions sit unused,
misaligned with actual workflows, or met with skepticism,” he says. ... Without
a workforce that embraces AI, “achieving real business impact is challenging,”
says Sreekanth Menon, global leader of AI/ML at professional services and
solutions firm Genpact. “This necessitates leadership prioritizing a
digital-first culture and actively supporting employees through the transition.”
To ease employee concerns about AI, leaders should offer comprehensive AI
training across departments, Menon says. ... AI isn’t a one-time deployment.
“It’s a living system that demands constant monitoring, adaptation, and
optimization,” Searce’s Pallath says. “Yet, many organizations treat AI as a
plug-and-play tool, only to watch it become obsolete. Without dedicated teams to
maintain and refine models, AI quickly loses relevance, accuracy, and business
impact.” Market shifts, evolving customer behaviors, and regulatory changes can
turn a once-powerful AI tool into a liability, Pallath says.
Now Is the Time to Transform DevOps Security
Traditionally, security was often treated as an afterthought in the software
development process, typically placed at the end of the development cycle. This
approach worked when development timelines were longer, allowing enough time to
tackle security issues. As development speeds have increased, however, this
final security phase has become less feasible. Vulnerabilities that arise late
in the process now require urgent attention, often resulting in costly and
time-intensive fixes. Overlooking security in DevOps can lead to data breaches,
reputational damage, and financial loss. Delays increase the likelihood of
vulnerabilities being exploited. As a result, companies are rethinking how
security should be embedded into their development processes. ...
Significant challenges are associated with implementing robust security
practices within DevOps workflows. Development teams often resist security
automation because they worry it will slow delivery timelines. Meanwhile,
security teams get frustrated when developers bypass essential checks in the
name of speed. Overcoming these challenges requires more than just new tools and
processes. It's critical for organizations to foster genuine collaboration
between development and security teams by creating shared goals and
metrics.
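Shared goals and metrics can be made concrete in the pipeline itself. The Python sketch below shows one hypothetical form of that: a gate that reads scanner findings from a JSON file and fails the build when anything at or above an agreed severity appears. The findings file, its format, the severity scale, and the threshold are all assumptions for illustration.

# Illustrative sketch: a pipeline gate that enforces a shared security metric.
# Assumes some scanner has already written findings to a JSON file; the file
# format, severity scale, and threshold here are hypothetical.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"   # the shared bar agreed between dev and security teams

def gate(findings_path: str) -> int:
    with open(findings_path) as f:
        findings = json.load(f)          # e.g. [{"id": "...", "severity": "high"}, ...]
    blocking = [x for x in findings
                if SEVERITY_RANK.get(x.get("severity", "low"), 1) >= SEVERITY_RANK[FAIL_AT]]
    for x in blocking:
        print(f"BLOCKING: {x.get('id', 'unknown')} ({x['severity']})")
    print(f"{len(blocking)} blocking finding(s); threshold is '{FAIL_AT}' and above")
    return 1 if blocking else 0          # non-zero exit fails the pipeline step

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-results.json"))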
AI development pipeline attacks expand CISOs’ software supply chain risk

Malicious software supply chain campaigns are targeting development
infrastructure and code used by developers of AI and large language model (LLM)
machine learning applications, the study also found. ... Modern software supply
chains rely heavily on open-source, third-party, and AI-generated code,
introducing risks beyond the control of software development teams. Better
controls over the software the industry builds and deploys are required,
according to ReversingLabs. “Traditional AppSec tools miss threats like malware
injection, dependency tampering, and cryptographic flaws,” said ReversingLabs’
chief trust officer Saša Zdjelar. “True security requires deep software
analysis, automated risk assessment, and continuous verification across the
entire development lifecycle.” ... “Staying on top of vulnerable and malicious
third-party code requires a comprehensive toolchain, including software
composition analysis (SCA) to identify known vulnerabilities in third-party
software components, container scanning to identify vulnerabilities in
third-party packages within containers, and malicious package threat
intelligence that flags compromised components,” Meyer said.
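As a simplified picture of the "malicious package threat intelligence" piece of that toolchain, the Python sketch below cross-checks pinned dependencies against a blocklist feed. The feed entries and package pins are invented; a real check would consume a vendor or community feed and cover transitive dependencies as well.

# Illustrative sketch of a threat-intelligence check: compare pinned dependencies
# against a blocklist feed. The feed contents and requirements are made up.
THREAT_FEED = {
    # (package, version) pairs flagged as tampered or malicious -- hypothetical data
    ("examplelib", "1.4.2"),
    ("fastutils",  "0.9.1"),
}

requirements = ["examplelib==1.4.2", "requests==2.32.0", "numpy==1.26.4"]

def parse_pin(line: str):
    name, _, version = line.partition("==")
    return name.strip().lower(), version.strip()

flagged = [line for line in requirements if parse_pin(line) in THREAT_FEED]

if flagged:
    print("Compromised components found, build should be blocked:")
    for line in flagged:
        print("  -", line)
else:
    print("No known-malicious pins detected (unknown threats still apply).")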
Data Governance as an Enabler — How BNY Builds Relationships and Upholds Trust in the AI Era
Governance can feel like bureaucracy. A lot of us grew up seeing it as something we
don’t naturally gravitate toward. It’s not something we want more of. But we
take a different view: governance is enabling. I’m responsible for data
governance at Bank of New York. We operate in a hundred jurisdictions, with
regulators and customers around the world. Our most vital equation is the trust
we build with the world around us, and governance is what ensures we uphold that
trust. Relationships are our top priority. What does that mean in practice? It
means understanding what data can be used for, whose data it is, where it should
reside, and when it needs to be obfuscated. It means ensuring data security.
What happens to data at rest? What about data in motion? How are entitlements
managed? It’s about defining a single source of truth, maintaining data quality,
and managing data incidents. All of that is governance. ... Our approach follows
a hub-and-spoke model. We have a strong central team managing enterprise assets,
but we've also appointed divisional data officers in each line of business to
oversee local data sets that drive their specific operations. These divisional
data officers report to the enterprise data office. However, they also have the
autonomy to support their business units in a decentralized manner.
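A tiny Python sketch can show how such governance rules become enabling guardrails in code: an access check that enforces entitlements and jurisdiction before a record is released, and obfuscates the fields that policy says must be masked. The dataset name, roles, and masking rules here are hypothetical, not BNY's actual controls.

# Illustrative sketch of governance as an enabler: an access check that enforces
# entitlements, residency, and obfuscation before data is released. The policy
# fields and masking rules are hypothetical.
POLICY = {
    "client_positions": {
        "allowed_roles": {"divisional_data_officer", "enterprise_data_office"},
        "allowed_jurisdictions": {"US", "EU"},
        "masked_fields": {"tax_id", "account_number"},
    }
}

def release(dataset: str, record: dict, role: str, jurisdiction: str) -> dict:
    rules = POLICY[dataset]
    if role not in rules["allowed_roles"]:
        raise PermissionError(f"role '{role}' has no entitlement to {dataset}")
    if jurisdiction not in rules["allowed_jurisdictions"]:
        raise PermissionError(f"{dataset} may not be accessed from {jurisdiction}")
    # Obfuscate fields the policy says must never leave the hub in the clear.
    return {k: ("***" if k in rules["masked_fields"] else v) for k, v in record.items()}

record = {"client": "ACME", "tax_id": "12-3456789", "account_number": "987654", "balance": 1000}
print(release("client_positions", record, "divisional_data_officer", "US"))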