
Quantum computing has been seen as a potential threat to cryptocurrencies ever since the birth of Bitcoin and its contemporaries. Before we can understand why, let’s look at how cryptocurrency transactions work, taking Bitcoin as an example. Bitcoin is a decentralized peer-to-peer system for transferring value. This means that, unlike traditional financial institutions that mediate many of these processes, Bitcoin users handle things themselves, such as creating their own addresses. By means of cryptographic algorithms, users generate a random private key and a corresponding public address to perform transactions and keep them secure. At the moment, cryptographically secure private keys serve as the only protection for users’ funds. If quantum computers could crack the encryption safeguarding those private keys, it would likely mark the end for Bitcoin and all other cryptocurrencies. Not that we want to be the bearer of bad news, but there are already quantum algorithms that would undermine public-key cryptography: Shor’s algorithm, for example, would enable the extraction of a private key from any public key.
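
As a rough illustration of the key-to-address relationship described above, here is a minimal Python sketch; it assumes the third-party ecdsa package is installed, and it uses a plain SHA-256 digest as a simplified stand-in for an address (real Bitcoin addresses additionally apply RIPEMD-160 hashing and Base58Check encoding, which are omitted here).

import hashlib
import ecdsa  # third-party package: pip install ecdsa

# Generate a random private key on the secp256k1 curve (the curve Bitcoin uses).
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)

# Derive the matching public key; this is the value Shor's algorithm would attack.
public_key = private_key.get_verifying_key()

# Simplified "address": a SHA-256 digest of the raw public key bytes.
# (Real Bitcoin addresses also apply RIPEMD-160 and Base58Check encoding.)
address = hashlib.sha256(public_key.to_string()).hexdigest()

print("private key:", private_key.to_string().hex())
print("address    :", address)
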
Stalwart IBM is now offering an EDB Postgres solution. IBM’s Villalobos thinks demand for the open-source, cloud-oriented product is driven as much by enterprise modernization requirements as by anything else. He thinks customers should use IBM’s Rocket cloud, a methodology that helps customers appraise all of their database modernization options. “Not all customers totally understand where they are today,” he said. Analyzing what costs could be freed up is part of what IBM offers. Part of what is driving the movement toward EDB and PostgreSQL, according to Villalobos, is that EDB has done a good job of getting Postgres working for the enterprise. “Is there any database that can finally replace Oracle in the world?” Young asked. The interoperability and compatibility between Oracle and EDB make EDB a contender, he believes. Tangible differences, according to Sheikh, include getting environments running more quickly, with “less latency in terms of agility,” and the ability to deploy anywhere, both attractive factors for the data scientist.

Corporate culture often brings to mind ping pong tables, free beers on Fridays
and bean bags, but what about security and business continuity planning? Well,
they are about to become just as important. Cyber security needs to become front
of mind for all employees, not just those who work in IT. One way to do this is
through “war-gaming” activities, training employees and creating rapid response
plans to ensure that everyone is prepared in case of an attack. The roles of the
C-suite have been changing for quite some time now, as each role becomes less siloed and the company is viewed more as a single whole than as a set of separate teams.
The evolution of the chief information security officer (CISO) is a great
example of this – rather than being a separate role to the chief information
officer (CIO) or chief technology officer (CTO), the CISO is now responsible for
security, customer retention, business continuity planning and much more. Now,
this change needs to spread to the rest of the business so that all employees
prioritise security and collaboration, whatever their level and role. The
natural result of this is that teams will become more open and better at
information sharing which will make it easier to spot when there has been a
cyber security issue, as everyone will know what is and isn’t normal across the
company.

Many transformations begin somewhere after the first turn on the Implementation
Roadmap. Agile coaches will often engage after someone has, with the best of
intentions, decided to launch an Agile Release Train (ART), but hasn’t
understood how to do so successfully. As a result, the first Program Increment,
and SAFe, will feel painful. Have you ever seen an ART that is full of handoffs
and is unable to deliver anything of value? This pattern emerges when an ART is
launched within an existing organizational silo, instead of being organized
around the flow of value. When ARTs are launched in this way, the same problems
that have existed in the organization for years become more evident and more
painful. For this reason, many Agile and SAFe implementations face a reboot at
some point. Once the pain is felt, an Agile coach will help leaders understand why they’re not getting the expected results. Here’s where organizations will
reconsider the first straight of the Implementation Roadmap, find time for
training, and re-launch their ARTs. This usually happens after going through a
Value Stream Identification and ART Identification workshop to best understand
how to organize so that ARTs are able to deliver value.

Legacy applications, often based on a monolithic architecture, are not only difficult but also costly to update and maintain, since all of the application’s components are bundled together. Even when they are updated, integration complexities can waste time and resources. By
modernizing an application to a microservices architecture, components are
smaller, loosely coupled, and can be deployed and scaled independently. With
cloud architecture growing more complex, IT services providers are expected to
deliver the best combination of services to fulfil customer needs. They need to
conduct a deep-dive analysis into the customer’s business, technical costs and
contribution of each application in the IT portfolio. Based on the functional
and technical value that each application brings along, the IT services provider
should recommend the most compatible modernization and migration approach. In
the process, they will enable rationalization and optimization of the customer’s
IT landscape. Thus, only a handful of applications which are both functionally
and technically relevant are chosen for migration to the cloud.
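
To illustrate the kind of portfolio rationalization described above, here is a minimal Python sketch that maps each application’s functional and technical value to a candidate approach. The 0–10 scoring scale, the thresholds, and the category names are hypothetical assumptions for illustration, not a prescribed methodology.

def recommend_approach(functional_value, technical_value):
    """Map an application's functional and technical value (assumed 0-10 scales)
    to a candidate modernization approach."""
    if functional_value < 5 and technical_value < 5:
        return "retire or consolidate"        # low value on both axes
    if functional_value >= 5 and technical_value < 5:
        return "refactor to microservices"    # valuable to the business, technically dated
    if functional_value < 5 and technical_value >= 5:
        return "retain or rehost as-is"       # technically sound, limited business value
    return "replatform and migrate to cloud"  # strong candidate on both axes

# Hypothetical portfolio: application name -> (functional value, technical value)
portfolio = {"billing": (8, 3), "intranet wiki": (3, 7), "legacy CRM": (2, 2)}
for app, (fv, tv) in portfolio.items():
    print(f"{app}: {recommend_approach(fv, tv)}")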

Some blockchains select users to add and validate the next block by having them
devote computing power to solving cryptographic riddles. That approach has been
criticized for being inefficient and energy intensive. Other blockchains give
users holding the associated cryptocurrency power to validate new blocks on
behalf of everyone else. That approach has been criticized for being too
centralized, as relatively few people hold the majority of many
cryptocurrencies. Algorand also relies on an associated cryptocurrency to
validate new blocks. The company calls the currency Algo coins. Rather than giving the power to validate new blocks to the people with the most coins, however, Algorand has the owners of 1,000 tokens, out of the 10 billion in circulation, randomly select themselves to validate the next block. The tokens
are selected in a microsecond-long process that requires relatively little
computing power. The random selection also makes the blockchain more secure by
giving no clear target to hackers, helping Algorand solve the “trilemma” put
forth by the Ethereum founder with a scalable, secure, and decentralized
blockchain.
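
As a rough sketch of that self-selection idea in Python: each account privately derives a pseudorandom value from the public round seed and its own secret, and wins with probability proportional to its stake. This is only a toy model; the hash-based lottery below stands in for Algorand’s verifiable random functions, and the parameter names and figures are assumptions.

import hashlib
import hmac

TOTAL_SUPPLY = 10_000_000_000   # tokens in circulation
COMMITTEE_TOKENS = 1_000        # expected number of tokens selected per round

def lottery_value(secret_key: bytes, round_seed: bytes) -> float:
    """Map an account's secret and the public round seed to a number in [0, 1).
    Stand-in for a VRF: only the key holder can compute it in advance, so an
    attacker has no clear target before the selection is revealed."""
    digest = hmac.new(secret_key, round_seed, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") / 2 ** 256

def is_selected(secret_key: bytes, round_seed: bytes, stake: int) -> bool:
    """An account selects itself when its lottery value falls below a threshold
    proportional to its stake, so about COMMITTEE_TOKENS tokens win each round."""
    return lottery_value(secret_key, round_seed) < COMMITTEE_TOKENS * stake / TOTAL_SUPPLY

# Example: an account holding 5 million tokens checks itself for a given round.
print(is_selected(b"account-secret", b"round-42-seed", 5_000_000))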

While data science is complex, the widening data literacy gap threatens
innovation and collaboration across teams. Accenture found that 75% of employees
read data, yet only 21% “are confident with their data literacy skills.” While
organizations should invest in data literacy across the entire organization to
boost productivity, today’s data scientists should learn how to best communicate
the fundamentals behind data. The ability to explain concepts like variance, standard deviation, and distributions will help data scientists explain how the data was collected, what the data set reveals, and whether it appears valid. These insights are helpful when communicating data
to other stakeholders, especially the C-suite. ... The best data scientists are
also adept storytellers, providing the necessary context about data sets and
explaining why the data is important in the larger picture. When sharing a new
set of data or the results of a data project, focus on crafting a narrative on
the top three things the audience should walk away with. Reiterate these points throughout whatever medium you choose (presentation, email, interactive report, etc.) to move your audience to action.
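
To make the point about explaining fundamentals concrete, here is a small example using only Python’s standard-library statistics module; the figures are made up purely for illustration.

import statistics

# Hypothetical daily sign-up counts for two product variants (illustrative only).
variant_a = [120, 125, 118, 130, 122, 127]
variant_b = [90, 180, 60, 200, 75, 140]

for name, data in [("variant A", variant_a), ("variant B", variant_b)]:
    mean = statistics.mean(data)
    variance = statistics.variance(data)   # sample variance
    std_dev = statistics.stdev(data)       # sample standard deviation
    print(f"{name}: mean={mean:.1f}, variance={variance:.1f}, std dev={std_dev:.1f}")

# Similar means can hide very different spreads; reporting the standard deviation
# alongside the mean is one way to show how consistent (or noisy) the data is.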

Containerization and orchestration also overlap two other key trends right now:
Application migration and application modernization. Migration typically refers
to moving workloads from one environment (today, usually a traditional
datacenter) to another (usually a cloud platform). Modernization, often used as
an umbrella term, refers to the various methods of migrating applications to a
cloud environment. These run the gamut from leaving the code largely as-is to a
significant (or total) overhaul in order to optimize a workload for cloud-native
technologies. Red Hat technology evangelist Gordon Haff notes that this
spectrum bears out in Red Hat’s enterprise open source research: The 2020 report
found a healthy mix of strategies for managing legacy applications, including
“leave as-is” (31 percent), “update or modernize” (17 percent), “re-architect as cloud-enabled” (16 percent), and “re-architect as cloud-native” (14 percent).
“Leave as-is” is self-explanatory. “To whatever large degree containers are the
future, you don’t need to move everything there just because,” Haff says. But as
Red Hat’s research and other industry data reflect, many organizations are
indeed containerizing some workloads. Containers can play a role in any of the
other three approaches to legacy applications.

Although the utilities sector is typically resilient to unexpected macro
changes, current market conditions have forced every industry to drastically
accelerate digital transformation plans. The utilities sector is no exception, which perhaps explains the sudden shift to the cloud. As a result of this rapid
transformation, however, 55% of utilities companies admit that their security
measures haven’t kept up with the complexity of their IT infrastructure, meaning
they have less visibility and control of their data than ever before. This
‘chink in the armour’ could be their downfall when an attacker strikes. So, it
comes as no surprise that there are lingering concerns around cloud security for
two-thirds (67%) of utilities companies. Other apprehensions around cloud adoption include reduced data visibility (59%) and the risk of downtime (55%). These are valid concerns, given that almost two-thirds (64%) of utilities sector companies
admit that their organisation’s approach to dealing with cyber attacks could be
improved, while increasing resiliency to ransomware and data governance are
among their top three priorities.

There are actually several enablers for this evolution, a combination of internal and external factors. From an internal perspective, first there was our desire to further engineer, and in some cases reverse-engineer, our SDLC. We wanted to increase release velocity and shorten time to market, improve developer productivity, improve the interoperability of our capabilities and modernize our technological landscape; in addition, we wanted to find a pragmatic equilibrium between speed, reliability and security, while shifting the latter two as far left in the SDLC as sensible. Last but not least, we wanted to collapse the silos (business to IT and IT to IT) across the internal DevOps ecosystem by improving collaboration, communication and coalition around our DevOps vision.
From an external perspective, of course competition is a factor. In an era of
digital disruption, time to market and reliability of services and products are
vital. DevOps practices foster improvements on those quality aspects. In
addition, as a bank we are heavily regulated, with several requirements and guidelines from the FSA and the EBA.
Quote for the day:
"Leadership is the creation of an
environment in which others are able to self-actualize in the process of
completing the job." -- John Mellecker