Is The Public Losing Trust In AI?
Of course, the simplest way to look at this challenge is that in order for
people to trust AI, it has to be trustworthy. This means it has to be
implemented ethically, with consideration of how it will affect our lives and
society. Just as important as being trustworthy is being seen to be trustworthy.
This is why the principle of transparent AI is so important. Transparent AI
means building tools, processes, and algorithms that are understandable to
non-experts. If we are going to trust algorithms to make decisions that could
affect our lives, we must, at the very least, be able to explain why they are
making these decisions. What factors are being taken into account? And what are
their priorities? If AI needs the public's trust, then the public needs to be
involved in this aspect of AI governance. This means actively seeking their
input and feedback on how AI is used. Ideally, this needs to happen at both a
democratic level, via elected representatives, and at a grassroots level. Last
but definitely not least, AI also has to be secure. This is why we have recently
seen a drive towards private AI – AI that isn't hosted and processed on huge
public data servers like those used by ChatGPT or Google Gemini.
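The explainability bar set above ("what factors are being taken into account, and what are their priorities?") is easy to make concrete for simple models, where each factor's contribution to a decision can be read off directly. A toy sketch in Python — the model, feature names, and weights are all invented for illustration, not taken from any real system:

```python
# Illustrative only: a toy linear "loan approval" model whose decision
# can be explained by listing each factor's contribution to the score.
weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
bias = -0.1

def explain_decision(applicant):
    # Contribution of each factor = weight * feature value.
    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    return decision, contributions

decision, contribs = explain_decision(
    {"income": 1.0, "debt_ratio": 0.5, "years_employed": 2.0}
)
# The breakdown answers "which factors mattered, and by how much?"
```

For real models the same question is far harder to answer, which is exactly why transparent-AI tooling and governance matter.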
Reliable Distributed Storage for Bare-Metal CAPI Cluster
By default, most CAPI solutions will use the “Expand First” (or
“RollingUpdateScaleOut” in CAPI terms) repave logic. This logic installs a
fresh server and adds it to the cluster before removing an old one. While this
ensures the cluster never drops below its original compute capacity during the
repave, it is problematic for distributed storage clusters: you are introducing
a new node that holds no data while removing a node that does. So instead, we
want to use the “Contract First” repave logic for the pool
of storage nodes. That way, we can remove a storage node first, then reinstall
it and add it back to the cluster, thereby immediately restoring data
redundancy. ... So, if a different issue causes the distributed storage software
to not install properly on the new node, you can still run into trouble. For
example, Portworx supports specific kernel versions, and installing new nodes
with a kernel version it doesn’t support can prevent the installation from
succeeding. For that reason, it’s a good idea to lock the kernel version that
MaaS deploys. Reach out to us if you want to learn how to achieve that.
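The “Contract First” behaviour described above can be expressed, as I read the upstream Cluster API MachineDeployment rollout fields, by setting maxSurge to 0 and maxUnavailable to 1, so an old node is removed before its replacement joins. A minimal sketch — cluster and pool names are placeholders, and the machine template details are elided:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: storage-pool            # placeholder name
spec:
  clusterName: demo-cluster     # placeholder name
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0               # do not add a new node first ("Contract First")
      maxUnavailable: 1         # remove one storage node, then replace it
  template:
    spec: {}                    # machine template details elided
```

Compute pools without local data can keep the default surge-first settings; only the storage pool needs this inversion.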
Evaluating databases for sensor data
The primary determinant in choosing a database is understanding how an
application’s data will be accessed and utilized. A good place to begin is by
classifying workloads as online analytical processing (OLAP) or online
transaction processing (OLTP). OLTP workloads, traditionally handled by
relational databases, involve processing many short transactions from many
concurrent users. OLAP workloads are focused on analytics and
have distinct access patterns compared to OLTP workloads. In addition, whereas
OLTP databases work with rows, OLAP queries often involve selective column
access for calculations. ... Another consideration when selecting a database
is the internal team’s existing expertise. Evaluate whether the benefits of
adopting a specialized database justify investing in educating and training
the team and whether potential productivity losses will appear during the
learning phase. If performance optimization isn’t critical, using the database
your team is most familiar with may suffice. However, for performance-critical
applications, embracing a new database may be worthwhile despite initial
challenges and hiccups.
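The row-versus-column distinction above can be shown in a few lines of Python: an OLTP-style row store keeps each record together, while an OLAP-style column store keeps each field's values together, so a single-column aggregate scans only that field. The sensor data and field names here are invented:

```python
# Toy sensor readings: OLTP-style row store vs OLAP-style column store.
rows = [
    {"sensor": "a", "temp": 20.5, "humidity": 40},
    {"sensor": "b", "temp": 21.0, "humidity": 42},
    {"sensor": "c", "temp": 19.5, "humidity": 38},
]

# Row store: fetching one whole record (typical OLTP access) is one lookup.
record = rows[1]

# Column store: the same data laid out by field. An aggregate over one
# column (typical OLAP access) touches only that column's values.
columns = {key: [r[key] for r in rows] for key in rows[0]}
avg_temp = sum(columns["temp"]) / len(columns["temp"])
```

Dedicated column stores take this further with compression and vectorized scans, but the access-pattern trade-off is already visible at this scale.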
Surviving the “quantum apocalypse” with fully homomorphic encryption
There are currently two distinct approaches to facing the impending “quantum
apocalypse”. The first uses the physics of quantum mechanics itself and is
called Quantum Key Distribution (QKD). However, QKD only really solves the
problem of key distribution, and it requires dedicated quantum connections
between the parties. As such, it is not scalable to solve the problems of
internet security; instead, it is most suited to private connections between
two fixed government buildings. It is impossible to build internet-scale,
end-to-end encrypted systems using QKD. The second solution is to utilize
classical cryptography but base it on mathematical problems for which we do
not believe a quantum computer gives any advantage: this is the area of
post-quantum cryptography (PQC). PQC algorithms are designed to be essentially
drop-in replacements for existing algorithms, which would not require many
changes in infrastructure or computing capabilities. NIST has recently
announced standards for public key encryption and signatures which are
post-quantum secure. These new standards are based on different mathematical
problems, ones believed to give quantum computers no advantage.
Teams, Slack, and GitHub, oh my! – How collaborative tools can create a security nightmare
Fast and efficient collaboration is essential to today’s business, but the
platforms we use to communicate with colleagues, vendors, clients, and
customers can also introduce serious risks. Looking at some of the most common
collaboration tools — Microsoft Teams, GitHub, Slack, and OAuth — it’s clear
there are dangers presented by information sharing, as valuable as that is to
business strategy. Any of these, if not properly safeguarded or if used
inappropriately, can become a tool for attackers to gain access to your
network. The best protection is to be aware of these risks and apply the
appropriate controls and policies to prevent attackers from gaining a foothold
in your organization — that also means acknowledging and understanding the
threats of insider risk and data extraction. Attackers often know your network
better than you do. Chances are, they also know your data-sharing platforms
and are targeting those as well. Something as simple as improper password
sharing can let a bad actor phish their way into a company’s network, and
collaboration tools present a golden opportunity.
Improving computational performance of AI requires upskilling of professionals in Embedded/VLSI area
Implementing AI systems or applications requires processors capable of
intensive computation at low power and cost. Here, Very Large Scale
Integration (VLSI) and embedded system design play a critical role. VLSI
design involves the creation and miniaturisation of complex circuits, such as
processors, memory circuits, and more recently, customized hardware for AI
applications. On the other hand, embedded systems are computing systems for
dedicated or specific functionalities, such as smart agriculture or industrial
automation. The integration of VLSI with AI has the potential to revolutionise
various sectors by enabling faster, more power-efficient, and customised
hardware for AI applications. ... AI-based solutions are applied in designing
and deploying communication systems to significantly enhance network
performance and thereby the overall user experience. AI algorithms can
dynamically allocate resources such as power and bandwidth, leading to
improved spectral efficiency, reduced interference, and lower power
consumption. Intelligent beamforming using AI algorithms enables
wireless systems to focus their power and frequency band for specific users or
devices.
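As a concrete, textbook baseline for the dynamic power allocation mentioned above, the classical water-filling algorithm splits a power budget across channels according to their gains — the kind of optimizer that AI-based schemes compete with or extend. The channel gains and budget below are invented for illustration:

```python
# Water-filling: allocate total power P across channels with gains g[i] to
# maximize sum(log(1 + g[i] * p[i])). Each channel receives
# p[i] = max(0, mu - 1/g[i]), with the "water level" mu found by bisection.
def water_filling(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu  # water level too high: budget exceeded
        else:
            lo = mu  # water level too low: budget unused
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Better channels (higher gain) receive more power.
alloc = water_filling([2.0, 1.0, 0.5], total_power=3.0)
```

In practice, learned schedulers are attractive precisely where such closed-form allocations break down, e.g. under interference or non-stationary traffic.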
Microsoft announces collaboration with NVIDIA to accelerate healthcare and life sciences innovation with advanced cloud, AI and accelerated computing capabilities
Microsoft, NVIDIA and SOPHiA GENETICS are collaborating to leverage combined
expertise in technology and genomics to develop a streamlined, scalable and
comprehensive whole-genome analytical solution. As part of this collaboration,
the SOPHiA DDM Software-as-a-Service platform, hosted on Azure, will be
powered by NVIDIA Parabricks for SOPHiA DDM’s whole genome application.
Parabricks is a scalable genomics analysis software suite that leverages
full-stack accelerated computing to process whole genomes in minutes.
Compatible with all leading sequencing instruments, Parabricks supports
diverse bioinformatics workflows and integrates AI for accuracy and
customization. ... Microsoft aims to propel healthcare and life sciences into
an exciting new era of medicine, helping unlock transformative possibilities
for patients worldwide. The combination of the global scale, security and
advanced computing capabilities of Microsoft Azure with NVIDIA DGX Cloud and
the NVIDIA Clara suite is set to accelerate advances in clinical research,
drug discovery and care delivery.
How Deloitte navigates ethics in the AI-driven workforce: Involve everyone
The approach to developing an ethical framework for AI development and
application will be unique for each organization. Organizations will need to determine
their use cases for AI as well as the specific guardrails, policies, and
practices needed to make sure that they achieve their desired outcome while
also safeguarding trust and privacy. Establishing these ethical guidelines --
and understanding the risks of operating without them -- can be very complex.
The process requires knowledge and expertise across a wide range of
disciplines. ... On a broader level, publishing clear ethics policies and
guidelines, and providing workshops and training sessions on AI ethics, were ranked in
our survey as some of the most effective ways to communicate AI ethics to the
workforce, and thereby ensure that AI projects are conducted with ethics in
mind. ... Leadership plays a crucial role in underscoring the importance of AI
ethics, determining the resources and experience needed to establish the
ethics policies for an organization, and ensuring that these principles are
rolled out. This was one reason we explored the topic of AI ethics from the
C-suite perspective.
How to stop data from driving government mad
This would be a start, but everybody in large organisations knows that
top-down initiatives from the centre rarely work well at the coalface. If the
JAAC is to be effective at converting data into information, what insight
could it glean from structures that have evolved to do this? And what could it
learn from scientific fields that manage this successfully? First, deep neural
networks learn by repeatedly passing information back and forth until every
neurone is tuned to achieve the same objective. Information flow in both
directions is the key. Neil Lawrence, DeepMind professor of machine learning
at the University of Cambridge, notes that in government, "People at the coal
face have a better understanding of the right interventions, although not what
the central policy might be; a successful centre will have a co-ordinating
function driven by an AI strategy, but will devolve power to the departments,
professions, and regulators to implement it." Or, as Jess Montgomery, director
of AI@Cam, says: "Getting government data - and AI - ready will require
foundational work, for example in data curation and pipeline building."
Continuous Improvement as a Team
Conducting regular Retrospectives enables teams to pause and reflect on their
past actions, practices, and workflows, pinpointing both strengths and areas for
improvement. This continuous feedback loop is critical for adapting processes,
enhancing team dynamics, and ensuring the team remains agile and responsive to
change. Hold Retrospectives consistently at every Sprint's conclusion. Before
these sessions, collaboratively plan an agenda that promotes
openness and inclusivity. Facilitators should incorporate practices such as
anonymous feedback mechanisms and engaging games to ensure honest and
constructive discussions, setting the stage for meaningful progress and team
development. ... Effective stakeholder collaboration ensures the team’s efforts
align with the broader business goals and customer needs. Engaging stakeholders
throughout the development process invites diverse perspectives and feedback,
which can highlight unforeseen areas for improvement and ensure that the product
development is on the right track. Engage your stakeholders as a team, starting
with the Sprint Reviews.
Quote for the day:
“There's a lot of difference between
listening and hearing.” -- G. K. Chesterton