How Can Financial Institutions Prepare for AI Risks?
In exploring the potential risks of AI, the paper provided “a standardized
practical categorization” of risks related to data, AI and machine learning
attacks, testing, trust, and compliance. Robust governance frameworks must focus
on definitions, inventory, policies and standards, and controls, the authors
noted. Those governance approaches must also address the potential for AI to
present privacy issues and potentially discriminatory or unfair outcomes “if not
implemented with appropriate care.” In designing their AI governance mechanisms,
financial institutions must begin by identifying the settings where AI cannot
replace humans. “Unlike humans, AI systems lack the judgment and context for
many of the environments in which they are deployed,” the paper stated. “In most
cases, it is not possible to train the AI system on all possible scenarios and
data.” Hurdles such as the “lack of context, judgment, and overall learning
limitations” would inform approaches to risk mitigation, the authors added. Poor
data quality and the potential for machine learning/AI attacks are other risks
financial institutions must factor in.
How to turn everyday stress into ‘optimal stress’
What triggers a stress response in one person may hardly register with another.
Some people feel stressed and become aggressive, while others withdraw.
Likewise, our methods of recovery are unique—riding a bike, for instance,
versus reading a book. Executives, however, aren’t usually aware of their
stress-related patterns and idiosyncrasies and often don’t realize the extent of
the stress burden they are already carrying. Leadership stereotypes don’t help
with this. It’s no surprise that we can’t articulate how stress affects us when
we equate success with pushing boundaries to excess, fighting through problems,
and never admitting weakness. Many people we know can speak in detail about a
favorite vacation but get tongue-tied when asked what interactions consistently
trigger stress for them, or what time of day they feel most energized. To reach
optimal stress, we need to be conscious of our stress; in neurological terms,
it’s the first step toward lasting behavior change. As the psychiatrist and
author Daniel Siegel writes, “Where attention goes, neural firing flows and
neural connection grows." And it is these newly grown neurological pathways
that define our behavior and result in new habits.
How to Empower Transformation and Create ROI with Intelligent Automation
CIOs see ROI delivered in multiple ways. For example, a recent Forrester study
found that Bizagi's platform delivered a 288% financial return. CIOs also seek
benefits beyond cost savings, such as increased net promoter scores, realized
upsell opportunities, and improved end-user productivity. ...
The catch is that automation sets a very high bar for what machines can perform
reliably, especially because employees often interpret automation to mean
"without any human involvement." For example, you can automate many steps in a loan application and
its approval processes when the applicant checks all the right boxes. However,
most financial transactions have complex exceptions and actions that require
orchestration across multiple systems. Managers and employees know the daily
complications, and oversimplifying their jobs with only rudimentary automation
often leads to a backlash from vocal detractors. That's why CIOs and IT leaders
need more than simple task automation, departmental applications, or one-off
data analysis. Digital leaders recognize the importance of intelligence and
orchestration to modernize workflows, meet customer expectations, leverage
machine learning capabilities, and implement the required business rules.
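As a rough sketch of the gap between "checks all the right boxes" automation and
exception handling, the snippet below auto-approves a loan application only when
every business rule passes and otherwise routes it to a human reviewer. The rule
thresholds and field names are hypothetical and not tied to Bizagi or any other
platform.

# Hypothetical sketch: straight-through approval with human-in-the-loop exceptions.
# The thresholds and field names below are illustrative, not from the article.

def evaluate_loan(application: dict) -> str:
    """Return 'auto-approved' when every rule passes, otherwise route to a person."""
    rules = [
        application.get("credit_score", 0) >= 680,
        application.get("debt_to_income", 1.0) <= 0.40,
        application.get("documents_complete", False),
    ]
    if all(rules):
        return "auto-approved"           # the applicant checks all the right boxes
    return "routed-to-human-review"      # complex exceptions need human judgment

print(evaluate_loan({"credit_score": 720, "debt_to_income": 0.31, "documents_complete": True}))
print(evaluate_loan({"credit_score": 640, "debt_to_income": 0.55, "documents_complete": True}))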
Understand Bayes’ Theorem Through Visualization
Before getting to any definition: Bayes' Theorem is normally used when we have a
hypothesis, we have observed some evidence, and we would like to know the
probability that the hypothesis holds given that the evidence is true. That may
sound a bit confusing, so let's use the above visualization for a better
explanation. In the example, we want to know the probability of selecting a
female engineer given that the person selected has finished a Ph.D. The first
thing we need is the probability of selecting a female engineer from the
population without considering any evidence. This term, P(H), is called the
"prior". ... As we know, Bayes' theorem stems from Bayesian statistics, which
relies on subjective probabilities and uses Bayes' theorem to update knowledge
and beliefs about events and quantities of interest based on data. Hence, based
on some prior knowledge, we can draw initial inferences about the system (the
"prior" in Bayes) and then "update" these inferences as new data arrives to
obtain the "posterior". There are also terms like Bayesian inference and
frequentist statistical inference, which are not covered in this article.
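As a minimal sketch of how that update works, the snippet below plugs purely
illustrative numbers (assumptions for this example, not figures from the
article) into Bayes' theorem for the female-engineer case:

# Illustrative Bayes' theorem calculation; every probability here is made up.
# H = "the selected person is a female engineer"
# E = "the selected person has finished a Ph.D."

p_h = 0.10          # prior P(H): assumed share of female engineers in the population
p_e_given_h = 0.30  # likelihood P(E|H): assumed share of female engineers holding a Ph.D.
p_e = 0.12          # evidence P(E): assumed share of the whole population holding a Ph.D.

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"Posterior P(H|E) = {p_h_given_e:.2f}")  # 0.25 with these assumed numbers

With these assumptions, observing the Ph.D. evidence raises the probability of
"female engineer" from a prior of 0.10 to a posterior of 0.25.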
Leveraging Geolocation Data for Machine Learning: Essential Techniques
Fortunately, we don’t have to worry about parsing these different formats and
manipulating low-level data structures. We can use the wonderful GeoPandas
library in Python that makes all this very easy for us. It is built on top of
Pandas, so all of the powerful features of Pandas are already available to you.
It works with GeoDataFrames and GeoSeries, which are "spatially aware" versions
of Pandas DataFrame and Series objects, and it provides a number of additional
methods and attributes for operating on geodata within a DataFrame. A
GeoDataFrame is nothing but a regular Pandas DataFrame with an extra 'geometry'
column that captures the location data for every row. GeoPandas
can also conveniently load geospatial data from all of these different geo file
formats into a GeoDataFrame with a single command. We can perform operations on
this GeoDataFrame in the same way regardless of the source format. This
abstracts away all of the differences between these formats and their data
structures.
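A minimal sketch of that workflow, assuming GeoPandas is installed and using a
hypothetical GeoJSON file name:

import geopandas as gpd

# The file path is hypothetical; read_file also accepts shapefiles, GeoPackage,
# and other common geospatial formats with the same single command.
gdf = gpd.read_file("neighborhoods.geojson")

# A GeoDataFrame is a regular Pandas DataFrame plus a 'geometry' column.
print(type(gdf))
print(gdf.columns)

# Spatial attributes work the same way regardless of the source format,
# for example each shape's area or centroid.
print(gdf.geometry.area.head())
print(gdf.geometry.centroid.head())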
Why Probability Theory is Hard
First, probability theorists don’t even agree what probability is or how to
think about it. While there is broad consensus about certain classes of problems
involving coins, dice, coloured balls in perfectly mixed bags and lottery
tickets, as soon as we move into practical probability problems with more
vaguely defined outcome spaces, we are served an ontological omelette of
frequentism, Bayesianism, Kolmogorov axioms, Cox’s theory, subjective,
objective, outcome spaces and propositional credences. Even if the probationary
probability theorist is eventually indoctrinated (by choice or by accident of
course instructor) into one or other school, none of these frameworks is
conceptually easy to access. Small wonder that so much probabilistic pedagogy is
boiled down to methodological rote learning and rules of thumb. There’s more.
Probability theory is often not taught very well. The notation can be confusing;
and don’t get me started on measure theory. The good news is that in terms of
practical applications, very little can get you a very long way.
Open-source, cloud-native projects: 5 key questions to assess risk
Another important indicator of risk relates to who owns or controls an
open-source project. Projects with neutral governance, where decisions are made
by people from a variety of companies, present lower risk. The lowest-risk
projects are ones that fall under
vendor-neutral foundations. Kubernetes has been successful in part because
it is shepherded by the Cloud Native Computing Foundation (CNCF). Putting
Kubernetes into a neutral foundation provided a level playing field where people
from different companies could work together as equals, to create something that
benefits the entire ecosystem. The CNCF focuses on helping cloud-native projects
set themselves up to be successful with resource documents, maintainer sessions,
and help with various administrative tasks. In contrast, open-source projects
controlled by a single company have higher risk because they operate at the
whims of that company. Outside contributors have little recourse if that company
decides to go in a direction that doesn't align with the expectations of the
community's other participants. This can manifest as licensing changes, forks,
or other governance issues within a project.
Interpreted vs. compiled languages: What's the difference?
In contrast to compiled languages, interpreted languages generate an
intermediary instruction set that is not recognizable as source code. Nor is the
intermediary architecture-specific the way machine code is. The Java
language calls this intermediary form bytecode. This intermediary deployment
artifact is platform agnostic, which means it can run anywhere. But one caveat
is that each runtime environment needs to have a preinstalled interpreter. The
interpreter converts the intermediary code into machine code at runtime. The
Java virtual machine (JVM) is the required interpreter that must be installed in
any target environment in order for applications packaged and deployed as
bytecode to run. The benefit of applications built with an interpreted language
is that they can run in any environment. In fact, one of the mantras of the Java
language when it was first released was "write once, run anywhere," as Java apps
were not tied to any one OS or architecture. The drawback to an interpreted
language is that the interpretation step consumes additional clock cycles,
especially in comparison to applications packaged and deployed as machine
code.
Disrupting the disruptors: Business building for banks
The strategic target of a new build should be nothing less than radical
disruption. Banks should aim not only to expand their own core offerings but
also to create a unique combination of products and functionality that will
disrupt the market. Successful new launches come with a clear sense of mission
and direction, as well as a road map to profitability (see sidebar “Successful
business builders are realistic about the journey”). One regional digital
attacker in Asia targeted merchant acquiring and developed a network with more
than 700,000 merchants. In just four months, it created a product with the
capacity to process payments through QR codes at the point-of-sale systems of
the two main merchant acquirers in the region and to transfer money between
personal accounts. In another case, an incumbent bank launched a
state-of-the-art digital solution in just ten months. In China, a leading
global bank launched a digital-hybrid business that focuses on financial
planning and uses social media to connect with customers. A midsize Asian
bank, meanwhile, launched an ecosystem of services for the digital-savvy mass
and mass-affluent segment, aimed at making it easier for customers to manage
their financial lives.
9 Trends That Are Influencing the Adoption of DevOps and DevSecOps
Despite the challenges of adopting these approaches, the potential gains to be made are generally seen as justifying this risk. For most development teams, this will first mean moving to a DevOps process, and then later evolving DevOps into DevSecOps. Beyond the operational gains that can be made during this transition lie a number of other advantages. One of the often overlooked effects of just how widespread DevOps has become is that, for many developers, it has become the default way of working. According to open source contributor and DevOps expert Barbara Ericson of Cloud Defense, “DevOps has suddenly become so ubiquitous in software engineering circles that you’ll be forgiven if you failed to realize the term didn’t exist until 2009...DevOps extends beyond the tools and best practices needed to accomplish its implementation. The successful introduction of DevOps demands a change in culture and mindset.” This trend is only likely to continue in the future, and could make it difficult for firms to hire talented developers if they are lagging behind on their own transition to DevOps.
Quote for the day:
"Leadership is about being a servant
first." -- Allen West