Quote for the day:
"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey
5 key agenticops practices to start building now
“AI agents in production need a different playbook because, unlike traditional
apps, their outputs vary, so teams must track outcomes like containment, cost
per action, and escalation rates, not just uptime,” says Rajeev Butani, chairman
and CEO of MediaMint. ... Architects, devops engineers, and security leaders
should collaborate on standards for IAM and digital certificates for the initial
rollout of AI agents. But expect capabilities to evolve, especially as the
number of AI agents scales. As the agent workforce grows, specialized tools and
configurations may be needed. ... Devops teams will need to define the minimally
required configurations and standards for platform engineering, observability,
and monitoring for the first AI agents deployed to production. Then, teams
should monitor their vendor capabilities and review new tools as AI agent
development becomes mainstream. ... Select tools and train SREs on the concepts
of data lineage, provenance, and data quality. These areas will be critical to
up-skilling IT operations to support incident and problem management related to
AI agents. ... Leaders should define a holistic model of operational metrics for
AI agents, which can be implemented using third-party agents from SaaS vendors
and proprietary ones developed in-house. ... User feedback is essential
operational data that shouldn’t be left out of scope in AIops and incident
management. This data not only helps to resolve issues with AI agents, but is
critical for feeding back into AI agent language and reasoning models.
The great AI hype correction of 2025
The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls. Take a step back from the GPT-5 launch. It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me. ... Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November. It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.
The future of responsible AI: Balancing innovation with ethics
Trust begins with explainability. When teams understand the reasons for a
model’s behavior — the reasons behind a certain code being generated, a certain
test being selected, a certain dataset being prioritized — they can validate it
and fix it. Explainability matters to customers as well. Research shows that
when customers are clear on when and how AI is influencing decisions, they trust
the brand more. This does not require sharing the proprietary model
architectures; it simply requires transparency about where AI sits in the flow
of decision making. Another emerging pillar of trust is the responsible use of
synthetic data. In sensitive privacy environments, companies are generating
domain-specific synthetic datasets for experimentation. LLM (large language
model)-powered agents can be used in multi-agent pipelines to filter the outputs
for regulatory compliance, thematic compliance and accuracy of structure — all
of which help teams train/fine-tune the model without compromising data
privacy. ... Responsible AI is no longer just the last step in the
workflow. It’s becoming a blueprint for how teams build it, release it, and
iterate on it. The future will belong to organizations that think of
responsibility as a design choice, not a compliance checkbox. The goal is the
same whether it’s about using synthetic data safely, validating generative code,
or raising overall explainability in workflows: to create AI systems that people
trust and that teams can depend on.
Thriving in the unknown future
To navigate this successfully, we understood that our first challenge was one
of mindset. How could we maintain agility of thinking and resilience, while
also meeting our customers’ anticipated needs for a specific, defined product on
target deadlines? Since a core of our offering is technological excellence,
which ensures unmatched data accuracy, depth of insight and business
predictions, how could we insist on this high level of authority, with the
swirling changes all around us? We approach our work from a new point of view,
and with a great deal of curiosity and imagination. ... With all the hype
around AI, it is easy for our customers and our organizations to expect it to
achieve… everything. But, as professionals building these tools, we know this
is not the case. Many internal stakeholders and customers might not understand
the difference between predictive analytics, machine learning, and generative
AI, leading to misaligned expectations. ... Although our product, R&D,
data science, project management and customer success teams are each
independent, we work cross-functionally to enable swift action and change when
needed. Engineers, data scientists and product managers work
together for holistic problem-solving. These collaborations are less
formalized, instituted per project or issue, so colleagues feel free to turn
to each other for assistance and still can remain focused on individual
projects.
Tokenization takes the lead in the fight for data security
Because tokenization preserves the structure and ordinality of the original
data, it can still be used for modeling and analytics, turning protection into
a business enabler. Take private health data governed by HIPAA for example:
tokenization means that data can be used to build pricing models or for gene
therapy research, while remaining compliant. "If your data is already
protected, you can then proliferate the usage of data across the entire
enterprise and have everybody creating more and more value out of the data,"
Raghu said. "Conversely, if you don’t have that, there’s a lot of reticence
for enterprises today to have more people access it, or have more and more AI
agents access their data. Ironically, they’re limiting the blast radius of
innovation. The tokenization impact is massive, and there are many metrics you
could use to measure that – operational impact, revenue impact, and obviously
the peace of mind from a security standpoint." ... While conventional
tokenization methods can involve some complexity and slow down operations,
Databolt seamlessly integrates with encrypted data warehouses, allowing
businesses to maintain robust security without slowing performance or
operations. Tokenization occurs in the customer’s environment, removing the
need to communicate with an external network to perform tokenization
operations, which can also slow performance.
Enterprises to prioritize infrastructure modernization in 2026
The rise of AI has heightened the importance of IT modernization, as many
organizations are still reliant on outdated, legacy infrastructure that is
ill-equipped to handle modern workload requirements, says tech solutions
provider World Wide Technologies (WWT). ... The move to modernize data center
infrastructure has many organizations looking at private cloud models,
according to the WWT report: “The drive toward private cloud is fueled by
several needs, with one primary driver being greater data security and
privacy. Industries like finance and government, which handle sensitive
information, often find private cloud architectures better suited for meeting
strict compliance requirements.” ... There is also a move to build up network
and compute abilities at the edge, Anderson noted. “Customers are not going to
be able to home run all that AI data to their data center and in real time get
the answers they need. They will have to have edge compute, and to make that
happen, it’s going to be agents sitting out there that are talking to other
agents in your central cluster. It’s going to be a very distributed hybrid
architecture, and that will require a very high speed network,” Anderson said.
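Anderson’s hybrid pattern — process data where it is produced and send only what is needed to the central cluster — can be sketched as a minimal edge-side summarizer. The function name, summary fields, and threshold here are illustrative assumptions, not details from the article:

```python
from statistics import mean

def summarize_at_edge(readings, threshold):
    """Reduce a raw stream of sensor readings to a compact summary.

    Only this summary (not the raw stream) would travel to the central
    cluster, cutting bandwidth and avoiding a round trip for every reading.
    """
    anomalies = [r for r in readings if r > threshold]  # only interesting points travel
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "anomalies": anomalies,
    }
```

Calling `summarize_at_edge([10, 12, 95, 11], threshold=50)` ships three summary fields instead of the full stream; the central agents see the anomaly without ever receiving the raw data.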
... Such modernization must take power and cooling needs into account much
more than ever, Anderson said. “Most of our customers are not
sitting there with a lot of excess data center power; rather, most people are
out of power or need to be doing more power projects to prepare for the near
future,” he said.
How researchers are teaching AI agents to ask for permission the right way
Under-permissioning appeared mostly with highly sensitive information. Social
Security numbers, bank account details, and children’s names fell into this
category. Participants withheld Social Security numbers almost half the time,
even in tasks where the number would be necessary. The researchers noted that
people often stayed cautious when the data touched on financial or
identity-related matters. This tension between convenience and caution opens the door
to new risks when such systems move from controlled studies into production
environments. Brian Sathianathan, CTO at Iterate.ai, said the risk extends far
beyond the model itself. “Arguably the biggest vulnerability isn’t so much the
permission system itself but the infrastructure that it all runs on.” ...
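The deny-by-default behavior the study participants showed — releasing low-stakes data freely while withholding Social Security numbers and bank details absent explicit approval — can be sketched as a simple permission gate. The sensitivity labels, field names, and approvals structure below are illustrative assumptions, not details from the paper:

```python
from enum import Enum

class Sensitivity(Enum):
    LOW = 1
    HIGH = 2

# Illustrative labels; a real system would infer sensitivity per field.
FIELD_SENSITIVITY = {
    "email": Sensitivity.LOW,
    "shipping_address": Sensitivity.LOW,
    "ssn": Sensitivity.HIGH,
    "bank_account": Sensitivity.HIGH,
}

def gate_fields(requested, approvals):
    """Return the subset of requested fields the agent may use.

    HIGH-sensitivity fields pass only with explicit user approval;
    fields with no known sensitivity label are denied by default.
    """
    allowed = []
    for field in requested:
        level = FIELD_SENSITIVITY.get(field)
        if level is Sensitivity.LOW:
            allowed.append(field)
        elif level is Sensitivity.HIGH and approvals.get(field):
            allowed.append(field)
        # unknown or unapproved HIGH fields are silently withheld
    return allowed
```

With no approvals recorded, `gate_fields(["email", "ssn"], {})` releases only the email address — mirroring the participants who withheld SSNs even when a task needed them.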
Accuracy alone will not solve security concerns in sensitive fields.
Sathianathan said organizations need to treat permission inference as
protected infrastructure. “Mitigation here, in practice, means running
permission inference behind your firewall and on your hardware. You should
treat it like your SIEM where things are isolated, auditable, and never
outsourced to shared infrastructure. You can’t let the permission system learn
from unvetted data.” ... “The paper shows that collaborative filtering can
predict user preferences with high accuracy, which is good, but the challenge
for regulated industries is more in ensuring that compliance requirements take
precedence over learned patterns even when users would prefer otherwise.”
Bank Tech Planning 2026: What’s Real and What’s Hype?
Cybersecurity issues underpin every aspect of modern banking. With digital
channels, cloud platforms and open APIs, financial institutions are exposed to
increasingly sophisticated attacks, including ransomware, phishing and
systemic fraud. Strong cybersecurity frameworks protect customer data, ensure
regulatory compliance, and maintain operational continuity. ... Legacy core
systems constrain banks’ ability to innovate, integrate with partners, and
scale efficiently. Cloud-native or hybrid-core architectures provide
flexibility, reduce maintenance burdens, and accelerate product delivery. By
decoupling core functions from hardware limitations, banks gain resilience and
the agility to respond quickly to market changes. ... Real-time payment
infrastructure allows immediate settlement of transactions, eliminating delays
inherent in batch processing. This capability is critical for consumer
expectations, B2B cash flow, and operational efficiency. It also supports
modern business needs, such as instant payroll, vendor disbursement, and
high-frequency transfers. ... Modern banks rely on consolidated data platforms
and advanced analytics to make timely, informed decisions. Predictive
modeling, fraud detection and customer insights depend on high-quality,
integrated data. Analytics also enables proactive risk management, operational
efficiency and personalized customer experiences.
Are You a Modern Professional?
Among professionals’ concerns: an overreliance on tech that would crimp
professional development and lead to job losses, and a tendency to hold AI to a
higher ROI bar. “More than 90% of
professionals said they believe computers should be held to higher standards
of accuracy than humans,” the report notes. “About 40% said AI outputs would
need to be 100% accurate before they could be used without human review,
meaning that it’s still critical that humans continue to review AI-generated
outputs.” ... Professionals are involved across the AI landscape—as
developers, providers, deployers and users—as defined by the EU AI Act. “While
this provides opportunities, it also exposes professionals to risks at every
stage—from biases, hallucinations, dependencies, misuse and more,” notes Dr
Florence G’Sell, professor of private law at the Cyber Policy Center at
Stanford University. “Opacity complicates the situation, as it makes assessing
model performance difficult. To mitigate these risks, organizations could seek
independent external assessment. But developers are reluctant to provide
auditors access to data sources, model weights and code. This limits the
ability to evaluate and ensure compliance with responsible AI principles.” ...
Regulatory uncertainty is already taking a toll on professionals, with
more than 60% of enterprises in the Asia-Pacific experiencing moderate to
significant disruption to their IT operations.
Why The Ability To Focus Will Be Crucial For Future Leaders
Focus has become a fundamental value, as noise and excess have taken over our
daily routines. Every notification, interruption or sense of urgency activates
our brain’s alert system, diverting energy from the prefrontal cortex, the
region responsible for decision making, planning and strategic thinking. In
the process, strategic vision gives way to the micro decisions of the
day-to-day. This is what some neuroscientists call a "fragmented attention"
state, in which the brain reacts more than it creates. For leaders, this means
you become reactive rather than innovative. ... Leaders who learn to regulate
their own mental operating system can gain a decisive advantage and the
ability to sustain clarity amid chaos. You can start with intentional pauses
throughout the day—simple practices such as deep breathing, brief walks or
moments of silence. Equally important is noticing when your mind drifts and
deliberately working to bring it back. ... Modern leaders often overvalue
expression and undervalue absorption. Yet, from a neurobiological standpoint,
silence is not the absence of thought; it’s the synchronization of neural
rhythms. One study found that periods of intentional quiet—no input, no
analysis, no output—can activate the prefrontal cortex and strengthen the
brain’s capacity for integration. Put another way: The mind reorganizes
fragments into coherence only when it’s not forced to produce. In a culture
addicted to immediacy, mental silence, time to recover and intentional breaks
become a competitive advantage.