Will blockchain fulfil its democratic promise or will it become a tool of big tech?
It’s easy to see why the blockchain idea evokes utopian hopes: at last,
technology is sticking it to the Man. In that sense, the excitement
surrounding it reminds me of the early days of the internet, when we really
believed that our contemporaries had invented a technology that was
democratising and liberating and beyond the reach of established power
structures. ... What we underestimated, in our naivety, were the power of
sovereign states, the ruthlessness and capacity of corporations and the
passivity of consumers, a combination of which eventually led to corporate
capture of the internet and the centralisation of digital power in the hands
of a few giant corporations and national governments. ... Will this happen to
blockchain technology? Hopefully not, but the enthusiastic endorsement of it
by outfits such as Goldman Sachs is not exactly reassuring. The problem with
digital technology is that, for engineers, it is both intrinsically
fascinating and seductively challenging, which means that they acquire a kind
of tunnel vision: they are so focused on finding solutions to the technical
problems that they are blinded to the wider context.
Ultra-Long Battery Life Is Coming … Eventually
Experts say battery life is getting better in consumer electronics—through a
combination of super-efficient processors, low-power states, and a little help
from advanced technologies like silicon anode. It’s just not necessarily
getting 10 times better. Conventional lithium-ion batteries have their energy
density limits, and they typically improve by single-digit percentages each
year. And there are downsides to pushing the limits of energy density.
“Batteries are getting a little bit better, but when batteries get better in
energy density, there’s usually a trade-off with cycle life,” says Venkat
Srinivasan, who researches energy storage and is the director of the Argonne
Collaborative Center for Energy Storage Science. “If you go to the big
consumer electronics companies, they’ll have a metric they want to achieve,
like we need the battery to last for 500 cycles over two or three years. But
some of the smaller companies might opt for longer run times, and live with
the fact that the product might not last two years.”
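As a rough back-of-the-envelope illustration of why single-digit annual gains don't add up to a 10x jump any time soon, here is a small sketch; the 5% figure is an assumed rate for illustration, not one quoted above:

```python
import math

# Assumed illustrative annual improvement in energy density (not a figure
# quoted in the article): 5% per year, compounded.
annual_gain = 0.05
target_multiple = 10  # a "10x better" battery

# Years needed for compound single-digit gains to reach a 10x improvement.
years_needed = math.log(target_multiple) / math.log(1 + annual_gain)
print(f"At {annual_gain:.0%}/year, a {target_multiple}x gain takes ~{years_needed:.0f} years")
# -> roughly 47 years at 5% per year
```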
7 obstacles that organizations face migrating legacy data to the cloud
Asked why they're looking to move their legacy data off-premises and to the
cloud, 46% of the executives cited regulatory compliance as the top reason.
Some 38.5% pointed to cost savings as the biggest reason, while 8.5% mentioned
business intelligence and analytics. The survey also asked respondents to
identify the features and benefits that would most influence them to move
their legacy data to the cloud. The major benefit cited by 66% was the
integration of data and legacy archives. Some 59% cited the cloud as a way to
centrally manage the archiving of all data including data from Office 365.
Other reasons mentioned included data security and encryption, advanced
records management, artificial intelligence-powered regulatory and compliance
checking, and fast and accurate centralized search. Of course, anxiety over
cyber threats and cyberattacks also plays a role in the decision to migrate
legacy data. Among the respondents, 42% said that concerns over cybersecurity
and ransomware attacks slightly or significantly accelerated the migration
plans.
View cloud architecture through a new optimization lens
IT and enterprise management in general are getting wise to the fact that a
solution that “works” or “seems innovative” still doesn’t explain why
operations cost so much more than forecast. Today we need to audit and
evaluate the end state of a cloud solution to provide a clear measure of its
success. The planning and development phases of a cloud deployment are great
places to plan and build in audit and evaluation procedures that will take
place post-development to gauge the project’s overall ROI. This
end-to-beginning view will cause some disturbance in the world of those who
build and deploy cloud and cloud-related solutions. Most believe their designs
and builds are cutting edge and built with the best possible solutions
available at the time. They believe their designs are as optimized as
possible. In most instances, they’re wrong. Most cloud solutions implemented
during the past 10 years are grossly underoptimized. So much so that if
companies did an honest audit of what was deployed versus what should have
been deployed, a very different picture of a truly optimized cloud solution
would take shape.
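A post-deployment audit of the kind described here can start very simply: compare forecast and actual spend and flag the gap. The sketch below is a hypothetical illustration; the workload names, figures, and threshold are assumptions, not details from the article:

```python
# Hypothetical post-deployment audit: compare forecast vs. actual monthly cloud
# spend and flag workloads whose overrun suggests an under-optimized design.
forecast = {"web-tier": 4_000, "analytics": 7_500, "archive": 1_200}   # assumed forecast, USD/month
actual   = {"web-tier": 5_900, "analytics": 13_800, "archive": 1_150}  # assumed actuals, USD/month

OVERRUN_THRESHOLD = 0.25  # flag anything more than 25% over forecast (assumed policy)

for workload, planned in forecast.items():
    spent = actual[workload]
    overrun = (spent - planned) / planned
    status = "REVIEW" if overrun > OVERRUN_THRESHOLD else "ok"
    print(f"{workload:10s} forecast ${planned:>7,} actual ${spent:>7,} overrun {overrun:+.0%} {status}")
```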
How Blockchain Startups Think about Databases and dApp Efficiency
When applications are built on top of a blockchain, these applications are
inherently decentralized — hence referred to as dApps (decentralized
applications). Most dApps today leverage a Layer 1 (L1) blockchain technology
like Ethereum as their primary form of storage for transactions. There are two
primary ways that dApps interact with the underlying blockchain: reads and
writes. As an example, take an NFT gaming dApp that rewards winning gamers
with coins they can then use to purchase NFTs: writes are performed to
an L1 chain whenever a gamer wins and coins are added to their wallet; reads
are performed when a gamer logs into the game and needs to pull the associated
NFT metadata for their game character (think stats, ranking, etc.). As an
early-stage dApp building the game described above, writing directly to
Ethereum is prohibitive because of slow performance (impacting latency) and
high cost. To help developers in the dApp ecosystem, sidechains and Layer 2
(L2) solutions like Polygon improve performance.
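To make the read/write split concrete, here is a minimal sketch using web3.py. The RPC endpoint, placeholder contract address, trimmed ABI, and the reward_contract/creditCoins names in the comments are assumptions for illustration, not details from the article:

```python
from web3 import Web3

# Connect to a public Polygon RPC endpoint (an L2/sidechain, as described above).
w3 = Web3(Web3.HTTPProvider("https://polygon-rpc.com"))

# Hypothetical ERC-721 contract holding the game's NFT characters.
NFT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder address
ERC721_ABI = [{  # minimal ABI fragment: just the standard tokenURI view function
    "name": "tokenURI",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "string"}],
}]

nft = w3.eth.contract(address=NFT_ADDRESS, abi=ERC721_ABI)

# READ: on login, pull the metadata URI for the player's character NFT.
# This is a read-only eth_call; nothing is written to the chain.
token_uri = nft.functions.tokenURI(42).call()

# WRITE: when the gamer wins, crediting coins means signing and sending a
# transaction, which costs gas and waits for confirmation -- the slow,
# expensive path the article says is prohibitive directly on Ethereum L1.
# (reward_contract / creditCoins are hypothetical names for illustration.)
# tx = reward_contract.functions.creditCoins(player, amount).build_transaction({...})
# signed = w3.eth.account.sign_transaction(tx, private_key)
# w3.eth.send_raw_transaction(signed.rawTransaction)
```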
Google calls for new government action to protect open-source software projects
“We need a public-private partnership to identify a list of critical open
source projects — with criticality determined based on the influence and
importance of a project — to help prioritize and allocate resources for the
most essential security assessments and improvements,” Walker wrote. The blog
post also called for an increase in public and private investment to keep the
open-source ecosystem secure, particularly when the software is used in
infrastructure projects. For the most part, funding and review of such
projects are conducted by the private sector. The White House had not
responded to a request for comment by the time of publication. “Open source
software code is available to the public, free for anyone to use, modify, or
inspect ... That’s why many aspects of critical infrastructure and national
security systems incorporate it,” wrote Walker. “But there’s no official
resource allocation and few formal requirements or standards for maintaining
the security of that critical code. In fact, most of the work to maintain and
enhance the security of open source, including fixing known vulnerabilities,
is done on an ad hoc, volunteer basis.”
How AI Can Improve Software Development
By leveraging AI to automate the identification of the specific lines of code
that require attention, developers can simply ask this AI-driven knowledge
repository where behaviors are coming from—and quickly identify the code
associated with that behavior. This puts AI squarely in the position of
intelligence augmentation, which is key to leveraging its capabilities. This
novel approach to AI reinterprets what the computation represents and converts
it into concepts, in effect “thinking” about the code in the same way humans
do. The result is that software developers no longer have to unearth the
intent of previous developers encoded in the software to find potential bugs.
Even better, developers are able to overcome the inadequacies of automated
testing by using AI to validate that they haven’t broken the system before
they compile or check in the code. The AI forward-simulates the change and
determines whether its effects stay confined to the behavior under change, so
that no unintended consequences arise.
A busy year ahead in low-code and no-code development
There's logic to developers embracing low-code and no-code methodologies.
"Developers love to code, but what they love more is to create, regardless the
language," says Steve Peak, founder of Story.ai. "Developers are always
seeking new tools to create faster and with more enjoyment. Once low and no
code grows into a tool that developers have more control over what they truly
need to get done; they unquestionably will use them. It helps them by getting
work done quicker with more enjoyment, examples of this are everywhere and are
engrained into most developers. A seek for the next, better thing." At the
same time, there is still much work to be done -- by professional developers,
of course -- before true low-code or no-code capabilities are a reality. "Even
the most popular tools in the market require significant API knowledge and
most likely JavaScript experience," says Peak. "The products that do not
require API or JavaScript experience are limited in functionality and often
resemble custom Kanban boards or media-rich spreadsheets in which information
logic is almost entirely absent."
The Future of the Metaverse + AI and Data Looks Bright
The next generation of VR headsets will collect more user information,
including the user’s stress level and even facial recognition data.
“We’re going to see more capabilities and really understanding the biometrics
that are generated from an individual, and be able to use that to enhance the
training experience,” he says. That data collection will enable a feedback
loop with the VR user. For example, if an enterprise is using VR to simulate a
lineman repairing a high-voltage wire, the headset will be able to detect the
anxiety level of the user. That information will inform the enterprise how to
personalize the next set of VR lessons for the employee, Eckert says.
“Remember, you’re running nothing more than software on a digital device, but
because it senses three dimensions, you can put input through gesture hand
control, through how you gaze, where you gaze. It’s collecting data,” he says.
“Now that data can then be acted upon to create that feedback loop. And that’s
why I think it’s so important. In this immersive world that we have, that
feedback …will make it even that much more realistic of an experience.”
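The feedback loop Eckert describes can be pictured in a few lines. Everything below (signal names, thresholds, lesson tiers) is a hypothetical illustration of the shape of that loop, not any vendor's headset SDK:

```python
# Hypothetical VR training feedback loop: biometric readings from the headset
# drive how the next lesson is personalized. All names and thresholds are
# illustrative assumptions.
def next_lesson_plan(biometrics: dict) -> str:
    stress = biometrics.get("stress_level", 0.0)        # 0.0 (calm) .. 1.0 (high anxiety)
    gaze_on_task = biometrics.get("gaze_on_task", 1.0)  # fraction of time gazing at the task

    if stress > 0.7:
        return "repeat current module at reduced intensity"
    if gaze_on_task < 0.5:
        return "insert guided-attention exercise before next module"
    return "advance to next module at standard intensity"

# Example reading collected during a simulated high-voltage line repair.
reading = {"stress_level": 0.82, "gaze_on_task": 0.9}
print(next_lesson_plan(reading))  # -> repeat current module at reduced intensity
```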
Data Engineering and Analytics: The End of (The End Of) ETL
Data virtualization does not purport to eliminate the requirement to transform
data. In fact, most DV implementations permit developers, modelers, etc., to
specify and apply different types of transformations to data at runtime. Does
DAF? That is, how likely is it that any scheme can eliminate the requirement
to transform data? Not very likely at all. Data transformation is never an end
unto itself. It is rather a means to the end of using data, of doing stuff
with data. ... Because this trope is so common, technology buyers should be
savvy enough not to succumb to it. Yet, as the evidence of four decades of
technology buying demonstrates, succumb to it they do. This problem is
exacerbated in any context in which (as now) the availability of new,
as-yet-untested technologies fuels optimism among sellers and buyers alike.
Cloud, ML and AI are the dei ex machina of our age, contributing to a built-in
tolerance for what amounts to utopian technological messaging. That is, people
not only want to believe in utopia -- who wouldn’t wish away the most
intractable of sociotechnical problems? -- but are predisposed to do so.
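The point that DV applies transformations at runtime rather than eliminating them can be shown with a plain SQL view: the transformation logic lives in the view definition and runs on every query, instead of being materialized beforehand by an ETL job. A minimal sketch, with table and column names as assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount_usd_cents INTEGER, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1999, "us"), (2, 125000, "de"), (3, 4550, "us")])

# The 'virtual' layer: a view that normalizes units and casing at query time.
# Nothing is copied or materialized; the transformation runs on every read,
# which is the sense in which DV defers rather than eliminates transformation.
conn.execute("""
    CREATE VIEW orders_clean AS
    SELECT id,
           amount_usd_cents / 100.0 AS amount_usd,
           UPPER(country)           AS country
    FROM orders
""")

for row in conn.execute("SELECT * FROM orders_clean"):
    print(row)
# (1, 19.99, 'US') ... the raw table is untouched.
```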
Quote for the day:
"Authority without wisdom is like a
heavy axe without an edge, fitter to bruise than polish." --
Anne Bradstreet