Four Things to Do if Your Bank is Eyeing Digital Assets
The evolution of money toward digital assets is affecting bank and fintech
organizations globally. Companies should proactively think through adjustments
now that will enable them to keep up with this rapid pace of change. At the
start of this century, when mobile banking apps first began appearing and banks
started offering remote deposit capture for checks, organizations that were
slow to adopt these technologies wound up being left behind. The OCC guidance
explicitly authorizing the use of digital assets should alleviate any doubts
around whether such currencies will be a major disruption. ... A crucial
determinant in how successful a bank will be in deploying digital asset-related
services is how well-equipped and properly aligned its technology platforms,
vendors, policies and procedures are. One of the primary concerns for
traditional banks will be assessing their existing core banking platform; many
leading vendors do not have blockchain and digital asset capabilities available
at this time. This type of readiness is key if bank management hopes to avoid
carrying significant technology debt into the next decade.
How do Decision Trees and Random Forests Work?
There are two types of decision trees: classification and regression. A
classification tree predicts the category of a categoric dependent variable —
yes/no, apple/orange, died/survived, etc. A regression tree predicts the value
of a numeric variable, similar to linear regression. The thing to watch out for
with regression trees is that they cannot extrapolate outside the range of
the training dataset the way linear regression can. However, regression trees can
use categoric input variables directly, unlike linear regression. While the
Titanic decision tree shows binary splits (each non-leaf node produces two child
nodes), this is not a general requirement. Depending on the decision tree, nodes
may have three or even more child nodes. I’m going to focus on classification
decision trees for the rest of this article, but the basic idea is the same for
regression trees as for classification trees. Finally, I’ll mention that this
discussion assumes the use of the rpart() function in R. I’ve heard that Python
can’t handle categoric variables directly, but I’m much less familiar with
Python, especially for data analysis. I believe that the basic theory is the
same, but the implementation is different.
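Since the article assumes rpart() in R, here is a rough equivalent sketch in Python using scikit-learn (synthetic data and illustrative parameters, not anything from the article); it shows the classification/regression distinction and the no-extrapolation behaviour described above. Note that, unlike rpart(), scikit-learn trees need categoric inputs encoded numerically.

```python
# Minimal sketch (scikit-learn, not the article's rpart()): a classification
# tree predicting a yes/no label and a regression tree predicting a number.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))          # two numeric features

# Classification: label is 1 when the feature sum exceeds a threshold.
y_class = (X.sum(axis=1) > 10).astype(int)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y_class)
print(clf.predict([[2.0, 3.0], [8.0, 7.0]]))   # e.g. [0 1]

# Regression: target is a noisy linear function of the features.
y_reg = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.5, size=200)
reg = DecisionTreeRegressor(max_depth=3).fit(X, y_reg)
# Inputs far outside the training range get predictions capped near the
# training-data maximum (about 30) -- the tree does not extrapolate.
print(reg.predict([[50.0, 50.0]]))
```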
Why financial-services firms need to change with the times
Rapidly evolving technology, regulatory constraints, and relentless pressure to
hit short-term financial targets may be hindering firms from making needed
investments to upskill their employees. These employees also face critical
skills gaps in areas such as empathy, resilience, adaptability, and creative
problem-solving. Turnover is a factor as well — firms may resist investing in
bespoke training initiatives that increase the market value of their people, who
then leave and take their enhanced skills profile with them. Such programs are
expensive and have an uncertain ROI. ... The challenge to upskill so many people
is so significant that firms may not be able to solve it by working
independently — though many have started that journey. For example, in 2017,
Citigroup announced a partnership with Cornell Tech to develop digital talent in
the New York City labor market. But a market-based, go-it-alone approach may be
too slow, or risk leaving small firms behind. It behooves industry-wide
associations and trade groups to create the right foundation to help all firms
in a country close the skills gap, leading to faster progress at a sector
level.
The Rise and Rise of Digital Banking: How Fintech is Set to Disrupt Brick and Mortar Banking
Industry insiders have long been concerned about the role fintechs have been
playing in the world of banking and whether or not they will ultimately replace
traditional financial institutions. This fear was exacerbated by the recent
introduction of the People’s Bank of China Fintech Development Plan, which looked
to accelerate the accommodation of digital financial services in the country.
But could fintechs actually spell the end of traditional banking? To answer
this properly, let’s first consider what finance actually is. The purpose of finance is
to realise the optimal distribution of capital across time and space amid
uncertainties and to serve the real economy and maximise social utility. One big
barrier to this is adverse selection, which arises from a lack of information
and gives rise to ethical issues. Finance should exist to identify and price
risks. Any technology that is developed should be aimed at better understanding
customers and their willingness, and ability, to pay, while pricing those risks
accurately. With this in mind, traditional banks have an advantage in terms
of capital costs, while fintechs are competitive in terms of operating costs.
Quantum computing could be useful faster than anyone expected
For most scientists, a quantum computer that can solve large-scale business
problems is still a prospect that belongs to the distant future, and one that
won't be realized for at least another decade. But now researchers from US
banking giant Goldman Sachs and quantum computing company QC Ware have designed
new quantum algorithms that they say could significantly boost the efficiency of
some critical financial operations – on hardware that might be available in only
five years' time. Rather than waiting for a fully-fledged quantum computer,
bankers could start running the new algorithms on near-term quantum hardware and
reap the benefits of the technology even while quantum devices remain immature.
Goldman Sachs has, for many years, been digging into the potential that quantum
technologies have to disrupt the financial sector. In particular, the bank's
researchers have explored ways to use quantum computing to optimize what are
known as Monte Carlo simulations, which price financial assets based on how the
prices of other related assets change over time, thereby accounting for the risk
inherent in different options, stocks, currencies and commodities.
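For context, a classical Monte Carlo pricing run of the kind these quantum algorithms aim to accelerate can be sketched in a few lines of Python; the instrument and all parameters below are illustrative and not taken from the Goldman Sachs/QC Ware work.

```python
# Classical sketch: estimate a European call option price by simulating many
# possible end-of-period prices of the underlying asset (geometric Brownian
# motion). All numbers are made up for illustration.
import numpy as np

S0, K = 100.0, 105.0            # spot price and strike
r, sigma, T = 0.01, 0.2, 1.0    # risk-free rate, volatility, time to maturity
n_paths = 1_000_000

rng = np.random.default_rng(42)
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
# Classical Monte Carlo error shrinks as 1/sqrt(n_paths); amplitude-estimation
# style quantum algorithms target a quadratic speed-up over that rate.
print(f"Estimated call price: {price:.3f}")
```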
Cloud Native and Kubernetes Observability: Expert Panel
The concept of observability is really agnostic to where you’re running your
workload, but the added complexity of multi-tenancy, cloud-native workloads, and
containerization leads to a rising need for observability. Single-tenant
monoliths can be easier to make observable because all the functionality is
right there, but as you add more services and users there’s a chance that a bug
will only manifest for one particular combination of services, versions of those
services, and user traffic patterns. The most important thing to be aware of is
when you’re about to outgrow your previous solutions, and to be proactive about
adding the right instrumentation and analysis frameworks to achieve
observability before it’s too late. When you stop being able to understand the
blast radius each change will have, and when you stop being able to answer the
questions you have about your system because the underlying data has been
aggregated away…that’s the point at which it’s too late. So be proactive and
invest early in observability to both improve developer productivity and
decrease downtime.
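As a concrete, purely illustrative example of "adding the right instrumentation" early, here is a minimal tracing sketch using the OpenTelemetry Python SDK; the service name, span name, and attributes are hypothetical.

```python
# Requires the opentelemetry-api and opentelemetry-sdk packages.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Emit spans to stdout for the sketch; a real deployment would export to a
# tracing backend instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("handle_checkout") as span:
    # Recording tenant and version up front is what later lets you isolate a
    # bug that only appears for one combination of tenant, version and traffic.
    span.set_attribute("tenant.id", "tenant-42")
    span.set_attribute("service.version", "1.4.2")
    # ... business logic would go here ...
```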
How To Take Full Advantage Of GPUs In Large Language Models
Typically, model training uses weak scaling approaches and distributed data
parallelism to scale the training batch size with the number of GPUs. Though this
approach allows the model to train on larger datasets, it comes with a
trade-off: all parameters must fit on a single GPU. This is where model
parallelism comes into the picture. Model-parallel training overcomes this limitation as it
partitions the model across multiple GPUs. General-purpose model-parallel
frameworks such as GPipe and Mesh-TensorFlow have previously been proposed for
this purpose. While GPipe divides groups of layers across different processors,
Mesh-TensorFlow employs intra-layer model parallelism. Other methods of model
parallelism such as tensor and pipeline parallelism have been proposed too.
Unfortunately, the researchers at NVIDIA wrote, naive usage leads to fundamental
scaling issues at thousands of GPUs: expensive cross-node communication and idle
periods spent waiting on other devices are a few of the reasons. Moreover, the high number of
compute operations required can result in unrealistically long training times
without model parallelism.
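To make the distinction concrete, here is a toy PyTorch sketch of naive model parallelism that splits a small network across two GPUs. It only illustrates partitioning and the cross-device communication it introduces, not the Megatron-style tensor or pipeline parallelism NVIDIA describes; layer sizes and device IDs are assumptions.

```python
# Toy model parallelism in PyTorch: the two halves of the network live on
# different GPUs, and activations are copied between devices in forward().
# Assumes at least two CUDA devices are available.
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(4096, 10)).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Cross-device copy: this is the kind of communication that becomes
        # expensive when such partitioning is scaled naively to thousands of GPUs.
        return self.part2(x.to("cuda:1"))

model = TwoGPUModel()
out = model(torch.randn(32, 1024))   # output tensor lives on cuda:1
loss = out.sum()
loss.backward()                      # gradients flow back across both devices
```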
Optimal Feature Discovery: Better, Leaner Machine Learning Models Through Information Theory
From the perspective of information theory, both the prediction target and the
features in a model are random variables, and it’s possible to quantify in bits
the amount of information provided about the target by one or more features. One
important concept is relevance, a measure of how much information we expect to
gain about the target by observing the value of the feature. Another important
concept is redundancy, a measure of how much information is shared between one
feature and another. Going back to the coin flip example, there could be
different ways to obtain information about the bias of the coin. We could have
access to a feature that tells us the rate of heads based on the design of the
coin, or we could build a profile feature that tracks the number of heads and
tails, historically. Both features are equally relevant in that they provide
equal amounts of information, but observing both features doesn’t give us more
information than observing either one, hence they are mutually redundant.
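A rough way to see relevance and redundancy numerically is to treat both as mutual information. The sketch below uses made-up binary features analogous to the coin example: each is fully informative about the target, and the two are identical to each other.

```python
# Relevance and redundancy as mutual information, on synthetic binary data.
import numpy as np
from sklearn.metrics import mutual_info_score  # MI between discrete variables, in nats

rng = np.random.default_rng(7)
target = rng.integers(0, 2, size=10_000)   # e.g. a yes/no prediction target

feature_a = target.copy()                  # carries exactly the target's information
feature_b = target.copy()                  # a second, identical signal

# Relevance: information each feature provides about the target.
print(mutual_info_score(target, feature_a))    # ~0.69 nats (= 1 bit)
print(mutual_info_score(target, feature_b))    # same relevance

# Redundancy: information the two features share with each other.
print(mutual_info_score(feature_a, feature_b))
# Also ~0.69 nats: the features are fully redundant, so observing both tells
# us no more about the target than observing either one alone.
```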
There’s a revolution coming in voice profiling and the warning signs are loud and clear
When conducting research for my forthcoming book, The Voice Catchers: How
Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I
went through over 1,000 trade magazine and news articles on the companies
connected to various forms of voice profiling. I examined hundreds of pages of
US and EU laws applying to biometric surveillance. I analysed dozens of patents.
And because so much about this industry is evolving, I spoke to 43 people who
are working to shape it. It soon became clear to me that we are in the early
stages of a voice-profiling revolution that companies see as integral to the
future of marketing. Thanks to the public’s embrace of smart speakers,
intelligent car displays and voice-responsive phones – along with the rise of
voice intelligence in call centres – marketers say they are on the verge of
being able to use AI-assisted vocal analysis technology to achieve unprecedented
insights into shoppers’ identities and inclinations. In doing so, they believe
they will be able to circumvent the errors and fraud associated with traditional
targeted advertising.
Linux Foundation launches open source agriculture infrastructure project
The Linux Foundation has lifted the lid on a new open source digital
infrastructure project aimed at the agriculture industry. The AgStack
Foundation, as the new project will be known, is designed to foster
collaboration among all key stakeholders in the global agriculture space,
spanning private business, governments, and academia. As with just about every
other industry in recent years, there has been a growing digital transformation
across the agriculture sector that has ushered in new connected devices for
farmers and myriad AI and automated tools to optimize crop growth and circumvent
critical obstacles, such as labor shortages. Open source technologies bring the
added benefit of data and tools that any party can reuse for free, lowering the
barrier to entry and helping keep companies from getting locked into proprietary
software operated by a handful of big players. ... The AgStack Foundation will
be focused on supporting the creation and maintenance of free and
sector-specific digital infrastructure for both applications and the associated
data.
Quote for the day:
"Leadership appears to be the art of
getting others to want to do something you are convinced should be done." --
Vance Packard