Quote for the day:
"Don't watch the clock; do what it does.
Keep going." -- Sam Levenson

Architects are not super-human. Most learned to be good by failing miserably
dozens or hundreds of times. Many got the title handed to them. Many gave it
to themselves. Most come from spectacularly different backgrounds. Most have a
very different skill set. Most disagree with each other. ... When someone gets
online and says, ‘Real Architects’, I puke a little. There are no real
architects. Because there is no common definition of what that means. What
competencies should they have? How were those competencies measured and by
whom? Did the person who measured them have a working model by which to
compare their work? To make a real architect repeatedly, we have to get
together and agree what that means. Specifically. Repeatably. Over and over
and over again. Tens of thousands of times and learn from each one how to do
it better as a group! ... The competency model for a successful architect is
large and difficult to learn, and most employers do not recognize it or give you
opportunities to practice it very often. They have defined their own internal model,
from ‘all architects are programmers’ to ‘all architects work with the CEO’.
The truth is simple. Study. Experiment. Ask tough questions. Simple answers
are not the answer. You do not have to be everything to everyone. Business
architects aren’t right, but neither are software architects.
Poor risk management can lead to liquidity shortfalls, and failure to maintain
adequate capital buffers can potentially result in insolvency and trigger
wider market disruptions. Weak practices also contribute to a build-up of
imbalances, such as lending booms, which unravel simultaneously across
institutions and contribute to widespread market distress. In addition, banks’
balance sheets and financial contracts are interconnected, meaning a failure
in one institution can quickly spread to others, amplifying systemic risk. ...
Poor risk controls and a lack of enforcement also encourage excessive moral
hazard and risk-taking behavior that exceed what a bank can safely manage,
undermining system stability. Homogeneous risk diversification can also be
costly and exacerbate systemic risk. When banks diversify risks in similar
ways, individual risk reduction paradoxically increases the probability of
simultaneous multiple failures. Fragmented regulation and inadequate risk
frameworks fail to address these systemic vulnerabilities, since persistent
weak risk management practices threaten the entire financial system. In
essence, weak risk management undermines individual bank stability, while the
interconnected and pro-cyclical nature of the banking system can trigger
cascading failures that escalate into systemic crises.

Of all the banks presenting, BofA was the most explicit in describing how it
is using various forms of artificial intelligence. Artificial intelligence
allows the bank to effectively change the work across more areas of its
operations than prior types of tech tools allowed, according to Brian
Moynihan, chair and CEO. The bank included a full-page graphic among its
presentation slides, the chart describing four "pillars," in Moynihan’s words,
where the bank is applying AI tools. ... While many banks have tended to stop
short of letting their use of GenAI touch customers directly, Synchrony has
introduced a tool for its customers when they want to shop for various
consumer items. It launched its pilot of Smart Search a year ago. Smart Search
combines natural-language search with GenAI. It is a joint effort of
the bank’s AI technology and product incubation teams. The functionality
permits shoppers using Synchrony’s Marketplace to enter a phrase or theme to
do with decorating and home furnishings. The AI presents shoppers with a
"handpicked" selection of products matching the information entered, all of
which are provided by merchant partners. ... Citizens is in the midst of its
"Reimagining the Bank," Van Saun explained. This entails rethinking and
redesigning how Citizens serves customers. He said Citizens is "talking with
lots of outside consultants looking at scenarios across all industries across
the planet in the banking industry."
By whatever name you call it, automated reasoning refers to algorithms that
search for statements or assertions about the world that can be verified as true
by using logic. The idea is that all knowledge is rigorously supported by what's
logically able to be asserted. As Cook put it, "Reasoning takes a model and lets
us talk accurately about all possible data it can produce." Cook gave a brief
snippet of code as an example that demonstrates how automated reasoning achieves
that rigorous validation. ... AWS has been using automated reasoning for a
decade now, said Cook, to achieve real-world tasks such as guaranteeing delivery
of AWS services according to SLAs, or verifying network security. Translating a
problem into terms that can be logically evaluated step by step, like the code
loop, is all that's needed. ... The future of automated reasoning is melding it
with generative AI, a synthesis referred to as neuro-symbolic. On the most basic
level, it's possible to translate from natural-language terms into formulas that
can be rigorously analyzed using logic by Zelkova. In that way, Gen AI can be a
way for a non-technical individual to frame their goal in informal, natural
language terms, and then have automated reasoning take that and implement it
rigorously. The two disciplines can be combined to give non-logicians access to
formal proofs, in other words.
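The article mentions Cook's code-loop example but does not reproduce it. As a hypothetical illustration of the idea, the sketch below does bounded automated reasoning in plain Python: rather than spot-checking a few inputs, it exhaustively checks a logical assertion over every value in a bounded domain, so the result is a proof for that domain (or a concrete counterexample). The function names and the property checked are assumptions for illustration, not Cook's actual snippet.

```python
# Minimal sketch of bounded automated reasoning: exhaustively check a
# logical assertion over every input in a bounded domain, so the result
# is a proof for that domain rather than a handful of test cases.

def sum_to(n: int) -> int:
    """The program under verification: a simple accumulation loop."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def verify(prop, domain):
    """Return (True, None) if prop holds for every value in the domain,
    or (False, counterexample) for the first value that violates it."""
    for x in domain:
        if not prop(x):
            return False, x
    return True, None

# Assertion: the loop agrees with the closed-form formula n*(n+1)/2.
ok, cex = verify(lambda n: sum_to(n) == n * (n + 1) // 2, range(0, 1000))
print(ok, cex)  # True None
```

Industrial tools such as AWS's Zelkova work over unbounded domains using SMT solvers, but the shape is the same: translate the problem into a logical assertion, then let the machinery either prove it or produce a counterexample.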

Security culture is broadly defined as an organization's shared strategies,
policies, and perspectives that serve as the foundation for its enterprise
security program. For many years, infosec leaders have preached the importance
of a strong culture and how it can not only strengthen the organization's
security posture but also spur increases in productivity and profitability.
Security culture has also been a focus in the aftermath of last year's scathing
Cyber Safety Review Board (CSRB) report on Microsoft, which stemmed from an
investigation into a high-profile breach of the software giant at the hands of
the Chinese nation-state threat group Storm-0558. The CSRB found "Microsoft's
security culture was inadequate and requires an overhaul," according to the
April 2024 report. Specifically, the CSRB board members flagged an overall
corporate culture at Microsoft that "deprioritized both enterprise security
investments and rigorous risk management." ... But security culture goes beyond
frameworks and executive structures; Herzog says leaders need to have the right
philosophies and approaches to create an effective, productive environment for
employees throughout the organization, not just those on the security team. ...
A big reason why a security culture is hard to build, according to Herzog, is
that many organizations are simply defining success incorrectly.
What set the Wiki system apart was its built-in intelligence to personalize the
experience based on user roles. Kashikar illustrated this with a use case: “If
I’m a marketing analyst, when I click on anything like cross-sell, upsell, or
new customer buying prediction, it understands I’m a marketing analyst, and it
will take me to the respective system and provide me the insights that are
available and accessible to my role.” This meant that marketing, engineering, or
sales professionals could each have tailored access to the insights most
relevant to them. Underlying the system were core principles that ensured the
program's effectiveness, says Kashikar. These include information
accessibility and discoverability, and integration with business processes
to make it actionable. ... AI has become a staple in business conversations
to make it actionable. ... AI has become a staple in business conversations
today, and Kashikar sees this growing interest as a positive sign of progress.
While this widespread awareness is a good starting point, he cautions that
focusing solely on models and technologies only scratches the surface, or can
provide a quick win. To move from quick wins to lasting impact, Kashikar
believes that data leaders must take on the role of integrators. He says, “The
data leaders need to consider themselves as facilitators or connectors where
they have to take a look at the entire ecosystem and how they leverage this
ecosystem to create the greatest business impact which is sustainable as
well.”
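The role-aware behavior Kashikar describes can be pictured as a simple lookup that resolves the same query differently per role while enforcing role-based access. The role names, topics, and insight catalog below are illustrative assumptions, not details of the actual Wiki system.

```python
# Hypothetical sketch of role-aware insight routing: the same topic maps
# to different insights depending on the user's role, and access is
# enforced per role. Catalog contents are invented for illustration.

INSIGHTS = {
    ("marketing", "cross-sell"): "Cross-sell propensity dashboard",
    ("sales", "cross-sell"): "Account-level cross-sell pipeline",
    ("marketing", "new customer"): "New-customer buying prediction model",
}

ROLE_PERMISSIONS = {
    "marketing": {"cross-sell", "new customer"},
    "sales": {"cross-sell"},
}

def route(role: str, topic: str):
    """Return the insight for this role/topic, or None if the role
    has no access to the topic."""
    if topic not in ROLE_PERMISSIONS.get(role, set()):
        return None  # not accessible to this role
    return INSIGHTS.get((role, topic))

print(route("marketing", "cross-sell"))  # Cross-sell propensity dashboard
print(route("sales", "new customer"))    # None: outside the sales role's access
```

A real system would back this with an identity provider and an entitlement service rather than in-memory dictionaries, but the routing logic is the same shape.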

Security planning is heavily shaped by the location of a data center and its
proximity to critical utilities, connectivity, and supporting infrastructure.
“These factors can influence the reliability and resilience of data centers –
which then in turn will shift security and response protocols to ensure
continuous operations,” Saraiya says. In addition, rurality, crime rate, and
political stability of the region will all influence the robustness of security
architecture and protocols required. “Our thirst for information is not
abating,” JLL’s Farney says. “We’re doubling the amount of new information
created every four years. We need data centers to house this stuff. And that's
not going away.” John Gallagher, vice president at Viakoo, said all modern data
centers include perimeter security, access control, video surveillance, and
intrusion detection. ... “The mega-campuses being built in remote locations
require more intentionally developed security systems that build on what many
edge and modular deployments utilize,” Dunton says. She says remote monitoring
and AI-driven analytics allow centralized oversight while minimizing on-site
personnel, along with compact, hardened enclosures with integrated access
control, surveillance, and environmental sensors. Emphasis is also placed on
tamper detection, local alerting, and quick response escalation paths.
Attribution in cyberspace is incredibly complex because attackers use
compromised systems, VPNs, and sophisticated obfuscation techniques. Even with
high confidence, you could be wrong. Rather than operating in legal gray areas,
companies need to operate under legally binding agreements that allow security
researchers to test and secure systems within clearly defined parameters. That’s
far more effective than trying to exploit ambiguities that may not actually
exist when tested in court. ... Active defense, properly understood, involves
measures taken within your own network perimeter, like enhanced monitoring,
deception technologies like honeypots, and automated response systems that
isolate threats. These are defensive because they operate entirely within
systems you own and control. The moment you cross into someone else’s system,
even to retrieve your own stolen data, you’ve entered offensive territory. It
doesn’t matter if your intentions are defensive; the action itself is offensive.
Retaliation goes even further. It’s about causing harm in response to an attack.
This could be destroying the attacker’s infrastructure, exposing their
operations, or launching counter-attacks. This is pure vigilantism and has no
place in responsible cybersecurity. ... There’s also the escalation risk. That
“innocent” infrastructure might belong to a government entity, a major
corporation, or be considered critical infrastructure.
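The distinction drawn above, defensive measures operating entirely inside your own perimeter, can be sketched in a few lines: any connection to a decoy asset (honeypot) is suspicious by construction, and the automated response quarantines the source host without ever touching an external system. The addresses and the in-memory "isolation" below are hypothetical; a real deployment would call an EDR or firewall API.

```python
# Illustrative sketch of active defense inside one's own perimeter:
# touching a decoy asset triggers automated isolation of the source host.
# Addresses and the isolation mechanism are invented for illustration.

DECOY_ASSETS = {"10.0.9.10", "10.0.9.11"}  # honeypots: no legitimate traffic

isolated: set[str] = set()

def handle_connection(src_ip: str, dst_ip: str) -> str:
    """Classify a connection event and respond within our own network."""
    if dst_ip in DECOY_ASSETS:
        isolated.add(src_ip)   # automated response: quarantine the source
        return "isolated"
    if src_ip in isolated:
        return "blocked"       # a quarantined host stays blocked
    return "allowed"

print(handle_connection("10.0.1.5", "10.0.2.20"))  # allowed
print(handle_connection("10.0.1.5", "10.0.9.10"))  # isolated (touched a decoy)
print(handle_connection("10.0.1.5", "10.0.2.20"))  # blocked
```

Everything here acts on systems the defender owns; nothing reaches into the attacker's infrastructure, which is the line the passage argues must not be crossed.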

Data trust can be seen as data reliability in action. When you’re driving your
car, you trust that its speedometer is reliable. A driver who believes his
speedometer is inaccurate may alter the car’s speed to compensate unnecessarily.
Similarly, analysts who lose faith in the accuracy of the data powering their
models may attempt to tweak the models to adjust for anomalies that don’t exist.
Maximizing the value of a company’s data is possible only if the people
consuming the data trust the work done by the people developing their data
products. ... Understanding the importance of data trust is the first step in
implementing a program to build trust between the producers and consumers of the
data products your company relies on increasingly for its success. Once you know
the benefits and risks of making data trustworthy, the hard work of determining
the best way to realize, measure, and maintain data trust begins. Among the
goals of a data trust program are promoting the company’s privacy, security, and
ethics policies, including consent management and assessing the risks of sharing
data with third parties. The most crucial aspect of a data trust program is
convincing knowledge workers that they can trust AI-based tools. A study
released recently by Salesforce found that more than half of the global
knowledge workers it surveyed don’t trust the data that’s used to train AI
systems, and 56% find it difficult to extract the information they need from AI
systems.

A modern way of saying this is that questions are data. Leaders who want to
leverage this data should focus less on answering everyone’s questions
themselves and more on making it easy for the people they are talking to—their
employees—to access and help one another answer the questions that have the
biggest impact on the company’s overall purpose. For example, part of my work
with large companies is to help leaders map what questions their employees are
asking one another and analyze the group dynamics in their organization. This
gives leaders a way to identify critical problems and at the same time mobilize
the people who need to solve them. ... The key to changing the culture of an
organization is not to tell people what to do, but to make it easy for them to
ask the questions that make them consider their current behavior. Only by making
room for their colleagues, employees, and other stakeholders to ask their own
questions and activate their own experience and insights can leaders ensure that
people’s buy-in to new initiatives is an active choice, and thus something they
feel committed to acting on. ... The decision to trust the process of asking and
listening to other people’s questions is also a decision to think of questioning
as part of a social process—something we do to better understand ourselves and
the people surrounding us.