CIO Strategy for Mergers & Acquisitions
The success of a merger between two organizations relies on multiple factors, such as
economic certainty, accurate valuations, proper identification of targets, strong due
diligence processes and technology integration. The most prominent of these is
technology integration, i.e. merging the two organizations’ IT systems. Each
organization’s IT systems consist of a set of applications, IT infrastructure,
databases, licenses and technologies, each with its own complexities. After
integration, one set of systems and its infrastructure becomes redundant. The greater
the duplication, the higher the redundancy, and the higher the cost and complexity of
the integration. The role of the CIO and Information Technology (IT) in M&A has become
increasingly important, as quick turnaround time is the primary concern. CIOs need to
be involved during the deal preparation, assessment, and due diligence phases of M&A.
In addition, the CIO’s team needs to identify the organization’s key IT processes, IT
risks, costs and synergies.
Eight countries jointly propose principles for mutual recognition of digital IDs
There are 11 principles in total, all contained in a report [PDF] about digital
identity in a COVID-19 environment; the DIWG envisions them being used by all
governments when building digital identity frameworks. The principles are
openness, transparency, reusability, user-centricity, inclusion and
accessibility, multilingualism, security and privacy, technology neutrality and
data portability, administrative simplicity, preservation of information, and
effectiveness and efficiency. According to the DIWG, the principles aim to allow
for a common understanding to guide future discussions on both mutual
recognition and interoperability of digital identities and infrastructure. In
providing the principles, the DIWG noted that mutual recognition and
interoperability of digital identities between countries are still several years
away, with the group saying there are foundational activities that need to be
undertaken first. These foundational activities include establishing a common
language and shared definitions across digital identities, assessing and aligning
the respective legal and policy frameworks, and creating interoperable technical
models and infrastructure.
Joel Spolsky on Structuring the Web with the Block Protocol
The Block Protocol is not, however, the first attempt at bringing structure to
data presented on the web. The problem, says Spolsky, is that previous attempts
— such as Schema.org or Dublin Core — have included that structure as an
afterthought, as homework that could be left undone without any consequence to
the creator. At the same time, the primary benefit of doing that homework was
often to game search engine optimization (SEO) algorithms, rather than to
provide structured data to the web at large. Search engines quickly caught on to
that and began ignoring the content entirely, which led to web content creators
abandoning these attempts at structure. Spolsky said this led them to ask one
simple question: “What’s a way we can make it so that the web can be better
structured, in a way that’s actually easier to write for a web developer than if
they [had] left out the structure in the first place?” ... The basic building
blocks of the web — HTML and CSS — describe content and how it should be
displayed in a human-readable format, “but it doesn’t describe anything about
that type of data or what the data is or what it does,” said Spolsky.
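To make the "homework" concrete, the sketch below shows roughly what a Schema.org annotation involves. It is illustrative only, expressed in Python simply to show the shape of the JSON-LD that ends up embedded in a page; the field values are invented.

    import json

    # Illustrative only: a Schema.org "Article" annotation of the kind content
    # creators are asked to add alongside their HTML. In practice the resulting
    # JSON-LD is embedded in a <script type="application/ld+json"> tag.
    article_metadata = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "An example headline",  # invented values
        "author": {"@type": "Person", "name": "Jane Doe"},
        "datePublished": "2022-03-01",
    }

    print(json.dumps(article_metadata, indent=2))

Because an annotation like this only duplicates information already visible in the page, leaving it out has no consequence for how the page renders, which is the "homework left undone" problem Spolsky describes.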
Avoiding the Achilles Heel of Non-European Cybersecurity
US-based organizations are beholden to regulations such as the CLOUD Act and the
US PATRIOT Act, which pose a risk to data belonging to any other region. Any
application or solution built in the US, whether concerned with cybersecurity,
hosting or collaboration, is required to have a backdoor built in, allowing
third parties to access the data within, often without the owner ever knowing,
particularly if the owner is foreign. Moreover, on his last full day in office and
following the large-scale SolarWinds attack, former President Trump signed an
executive order decreeing that American IaaS cloud providers must keep a wealth
of sensitive information on their foreign clients — names, physical and email
addresses, national identification numbers, sources of payment, phone numbers
and IP addresses — in order to help US authorities track down cyber-criminals.
As these services include “destination” cloud networks such as AWS, Microsoft
Azure, and Google Cloud, this impacts many citizens and companies worldwide.
5 Questions for Evaluating DBMS Features and Capabilities
Among RDBMSs, both SQL Server and Snowflake use a kind of umbrella data type
(sql_variant and VARIANT, respectively) to store data of virtually any type. The
labor-saving dimension of
typing is much less important here. For example, in the case of the VARIANT
type, the database must usually be told what to do with this data. The emphasis
in this definition of a data type is on convenience: BLOB and
similar types are primarily useful as a means to store data in the RDBMS
irrespective of the data’s structure. Google Cloud’s implementation of a JSON
“data type” in BigQuery ticks both these boxes. First, it is labor-saving, in
that BigQuery knows what to do with JSON data according to its type. Second, it
is convenient, in that it gives customers a means to preserve and perform
operations on data serialized as JSON objects. The implementation permits an
organization to ingest JSON-formatted messages into the RDBMS (BigQuery) and to
preserve them intact. Access to raw JSON data could be valuable for future use
cases. It also makes it much easier for users to access and manipulate this
data.
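To make this concrete, here is a minimal sketch of reading such JSON data from Python with the google-cloud-bigquery client. It is not taken from the article, and the dataset, table, and column names (analytics.raw_events, payload) are invented for illustration.

    # Minimal sketch: querying fields out of a BigQuery JSON column from Python.
    # The dataset/table/column names are hypothetical.
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()  # uses application-default credentials

    query = """
        SELECT
          JSON_VALUE(payload, '$.user.id')    AS user_id,
          JSON_VALUE(payload, '$.event.name') AS event_name,
          payload                             AS raw_payload  -- ingested JSON kept intact
        FROM `analytics.raw_events`
        WHERE JSON_VALUE(payload, '$.event.name') = 'checkout'
        LIMIT 10
    """

    for row in client.query(query).result():
        print(row.user_id, row.event_name)

Because the payload column is typed as JSON, BigQuery knows how to evaluate the field accessors at query time, while the original ingested message remains preserved intact for future use cases.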
Digital payments: How banks can stave off fintech challengers
To safeguard their payments business, banks must pursue two main objectives:
replace their existing legacy systems and improve the payment services and
functionality they offer to retail and corporate customers. In this way, banks
can ensure that their provision of payment services remains intact. Some banks
have tried to solve this problem by acquiring a fintech challenger. Others have
sought to build their own technology from scratch – although this has been shown
to carry risks. However, one of the best options for banks is to find new
partners, in terms of both technology and services, with which they can create a
more loosely defined infrastructure for payment services. This, in turn, will
help them become more agile in the payments sphere, according to Frank.
“Banks like JP Morgan are a standard bearer here and commit huge sums to tech
investment annually,” says Frank. “The key is to target a more agile tech stack
both in terms of infrastructure – that is in terms of cloud adoption, enhanced
security, devices and networks, as well as applications – whether it is
delivered as a Software-as-a-Service (SaaS) or a white-labelled service.”
Cloud Data Management Disrupts Storage Silos and Team Silos Too
In the context of enterprise data storage, unstructured data management has been
a practice for many years, although it originated in storage vendor platforms.
Now that enterprises are using many different storage technologies — block
storage for databases and virtualization, NAS for user and application workloads,
backup solutions in the data center or in the cloud — a storage-centric approach
to data management no longer fits the bill. That’s because, among other reasons,
storage vendor data management solutions don’t solve the problem of managing
silos of data stored on different platforms. Silos hamper visibility and
governance, leading to higher costs and poor utilization. As more workloads and
data move to the cloud to save money and enable flexibility and innovation,
cloud data management has become a growing practice. Cloud data management (CDM)
goes beyond storage to meet the ever-changing needs for data mobility and
access, cost management, security and, increasingly, data monetization.
Executive Q&A: Data Management and the Cloud
Understanding which type of cloud database is the right fit is often the
biggest challenge. It’s helpful to think of cloud-native databases as being in
one of two categories: platform-native systems (i.e., offerings by cloud
providers themselves) or in-cloud systems offered by third-party vendors.
Platform-native solutions include Azure Synapse, BigQuery, and Redshift. They
offer deep integration with the provider’s cloud. Because they are highly
optimized for their target infrastructure, they offer seamless and immediate
interoperability with other native services. Platform-native systems are a
great choice for enterprises that want to go all-in on a given cloud and are
looking for simplicity of deployment and interoperability. In addition, these
systems offer the considerable advantage of dealing with only a single vendor.
In contrast, in-cloud systems tout cloud independence. This seems
like a great advantage at first. However, moving hundreds of terabytes between
clouds has its own challenges. In addition, customers inevitably end up using
other platform-native services that are only available on a given cloud, which
further reduces the perceived advantage of cloud independence.
The metaverse is a new word for an old idea
These are good conversations to have. But we would be remiss if we didn’t
take a step back to ask, not what the metaverse is or who will make it, but
where it comes from—both in a literal sense and also in the ideas it
embodies. Who invented it, if it was indeed invented? And what about earlier
constructed, imagined, augmented, or virtual worlds? What can they tell us
about how to enact the metaverse now, about its perils and its
possibilities? There is an easy seductiveness to stories that cast a
technology as brand-new, or at the very least that don’t belabor long,
complicated histories. Seen this way, the future is a space of reinvention
and possibility, rather than something intimately connected to our present
and our past. But histories are more than just backstories. They are
backbones and blueprints and maps to territories that have already been
traversed. Knowing the history of a technology, or the ideas it embodies,
can provide better questions, reveal potential pitfalls and lessons already
learned, and open a window onto the lives of those who learned
them.
Slow Down !! Cloud is Not for Everyone
“Most often it’s not the main course but the desserts that bloat your bill.” In
the cloud, it’s not only the cost of compute and memory, but also the cost of
lock-in. Assume you have an on-prem license for a database enterprise edition
that can’t be ported to the cloud (due to incompatibility, contractual
complications or much higher cloud license costs) and you opt to move to a
native database offered by your chosen cloud provider. What might appear to be a
straightforward migration effort is in fact a much deeper trap that locks you in
with your cloud vendor. As a first step, you need to train your workforce; then,
gradually, you will be forced to rewrite or replace all the homegrown and/or SaaS
features of your product to be compatible with the new service. These efforts
were never part of your original plan but have now become a critical necessity to
keep the lights on. Say that, after a certain period, you realize the cloud
service is not a great fit and decide to shift back or move on to a better
alternative; here the insidious lock-in effect appears. It makes such onward
movement particularly difficult – you need to burn significant dollars to migrate
out.
Quote for the day:
"When people talk, listen
completely. Most people never listen." -- Ernest Hemingway