Google DeepMind takes step closer to cracking top-level maths
Unlike a human mathematician, the systems were either flawless or hopeless. In
each of the questions they solved, they scored perfect marks, but for two out
of the six questions, they were unable to even begin working towards an
answer. Moreover, DeepMind, unlike human competitors, was given no time limit.
While students get nine hours to tackle the problems, the DeepMind systems
took three days working round the clock to solve one question, despite
blitzing another in seconds. ... “What we try to do is to build a bridge
between these two spheres,” said Thomas Hubert, the lead on AlphaProof, “so
that we can take advantage of the guarantees that come with formal mathematics
and the data that is available in informal mathematics.” After it was trained
on a vast number of maths problems written in English, AlphaProof used its
knowledge to try to generate specific proofs in the formal language. Because
those proofs can be verifiably true or not, it is possible to teach the system
to improve itself. The approach can solve difficult problems, but isn’t always
fast at doing so: while it is far better than simple trial and error, it took
three days to find the correct formal model for one of the hardest questions
in the challenge.
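The formal language in question is reportedly Lean, where a proof is itself a program that the proof checker either accepts or rejects; a trivial machine-checkable example (illustrative, not one of the competition problems):

```lean
-- A statement and its proof term, verified mechanically by Lean's kernel:
-- the checker accepts this only if the term really establishes the claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because verification is binary (the kernel accepts or rejects), every generated proof becomes a reliable training signal without human grading, which is what makes the self-improvement loop possible.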
Modern Leaders: Steps To Guide An Organization Through Rapid Growth
With the culture and compass set, leaders need to hold each other accountable
for acting according to established norms. Performative behavior is rampant
because we often turn a blind eye to issues. Passive bullying occurs when we
don't stand up for someone because it's easier to stay uninvolved. Leaders
must be willing to put their necks on the line for each other to build real
trust. People should feel free to come and go. Create systems where they feel
comfortable, feel accepted and can be seen and heard. Leaders must understand
that they can't force these connections but must genuinely care about their
employees. Performative leadership will fail, as people value authenticity
over money and power today. If you want people to care about you, it has to be
an exchange: you must care about them first. ... “Don’t think in silos!”
C-suites may say. Program managers, change
agents, integration managers, transformational offices, OKR champions and DEI
leaders don’t see silos. Neither does a chief of staff and many HR leaders. We
understand nurturing teams of people and goals from an unbiased perspective is
effective for everyone. Unfortunately, these roles often have no authority or
support, and they face a lot of adversity.
Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy
One thing that sets AI environments apart from their more traditional IT
counterparts is their capacity for autonomous agency. Companies do not just
want AI applications that can automate the creation of content or analyze
data; they want models that can take action. Such so-called agentic AI systems
pose even greater potential risks. If an attacker can cause an LLM to do
something unexpected, and the AI system has the ability to take action in
another application, the results can be dramatic, Harang says.
"We've seen, even recently, examples in other systems of how tool use can
sometimes lead to unexpected activity from the LLM or unexpected information
disclosure," he says, adding: "As we develop increasing capabilities —
including tool use — I think it's still going to be an ongoing learning
process for the industry." Harang notes that even with the greater risk, it's
important to realize that it's a solvable issue. He himself avoids the "sky is
falling" hyperbole around the risk of GenAI use, and often taps it to hunt
down specific information, such as the grammar of a specific programming
function, and to summarize academic papers.
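One common mitigation for the tool-use risk described above is to gate every model-proposed action behind an explicit allowlist, so a prompt-injected model cannot trigger arbitrary operations. A minimal sketch, assuming a hypothetical dispatcher (the names `ALLOWED_TOOLS` and `dispatch_tool_call` are illustrative, not an Nvidia or vendor API):

```python
# Gate LLM-requested tool calls behind an explicit allowlist: anything the
# model asks for that is not pre-approved is refused, not executed.

ALLOWED_TOOLS = {
    # Read-only helpers the agent is permitted to invoke.
    "summarize_paper": lambda text: text[:100],
    "lookup_grammar": lambda lang: f"grammar for {lang}",
}

def dispatch_tool_call(name: str, argument: str) -> str:
    """Execute a model-requested tool only if it is explicitly allowlisted."""
    if name not in ALLOWED_TOOLS:
        # Refuse rather than execute; in production this request would be
        # logged and surfaced for review.
        raise PermissionError(f"tool {name!r} is not allowlisted")
    return ALLOWED_TOOLS[name](argument)
```

The key design choice is deny-by-default: new capabilities must be added deliberately, which keeps "unexpected activity from the LLM" from reaching other applications.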
AI tutors could be coming to the classroom – but who taught the tutor, and should you trust them?
If AI systems are trained on biased data or without considering diverse
perspectives, there is a high likelihood that decisions made based on these
systems will favour one group over others, reinforce stereotypes, and ignore
or undervalue different ways of living and thinking. The concern isn’t just
about the influence AI can have on us but also how AI consumes and processes
data. ... If done well, a walled garden approach might provide a
comprehensive, inclusive, culturally sustaining pathway to better learning.
However, given the challenges of such an undertaking (never mind the expense),
the chances of success in practice are extremely small. Meanwhile, we can’t
just wait for AI tutors. AI is a reality in schools, and we need to prepare
students for what they face now and in the future. Specific tools are
important, but our focus should be on developing AI literacy across the
educational sector. This is why we are researching what it means to be AI
literate and how this can empower critical evaluation and ethical use,
ensuring AI complements rather than replaces human teaching.
The case for multicloud: Lessons from the CrowdStrike outage
Multicloud isn’t a magical cure for all that could go wrong. It’s an
architectural option with good and bad aspects. The recent outage or something
like it will likely drive many enterprises to multicloud for the wrong
reasons. We saw this during the pandemic when many new customers rushed to
public cloud providers for the wrong reasons. Today, we’re still fixing the
fallout from those impetuous decisions. Enterprises should thoroughly assess
their workloads and identify their critical applications before they implement
a multicloud strategy. Selecting appropriate cloud providers based on their
strengths and services is essential. You must manage multiclouds as a
collection of integrated systems. Today, many enterprises view multicloud as a
collection of silos. The silo approach will fail, either right away or
eventually. Treating multicloud as a collection of systems will require more
attention to detail, which translates into more upfront time and money to get
the plan right the first time. It’s still the best route because doing it the
second time is usually twice as expensive.
The Next Phase of Automation: Learning From the Past and Looking to the Future
There’s a tendency for workers to catastrophize AI tools for fear that they
will eliminate jobs. But this is the wrong way to look at it. AI won’t
replace you—it’s a tool that should be leveraged for its capacity to increase
your value in the workspace. Learn to harness the capabilities of AI
automation to remain competitive as the market evolves, otherwise you risk
becoming obsolete. ... AI will get better over time, but you need to be
realistic about its current capabilities. Simple task automation doesn’t
require complicated backend adapters; use it to take simple, repetitive tasks
off your plate, and stay on top of changes as the technology evolves to
automate more complex processes. ... Your automation
strategy should avoid focusing solely on AI tools. There are plenty of
automated tools that don’t use AI to perform very useful functions. The tools
you source today will set you up for the future, so it’s important to find a
full suite of automation tools that can handle the majority of your automation
needs. Using a single vendor helps ensure these tools work together seamlessly
and avoids coverage gaps.
Combating Shadow AI: Implementing Controls for Government Use of AI
Governance must start from a policy or mission perspective rather than a
technology perspective. Understanding the role of AI in government programs
from both a benefit and risk perspective takes intentionality by leadership to
appoint focused teams that can evaluate the potential intersections of AI
platforms and the mission of the agency. The increase in public engagement
through technology creates a rich, accessible set of data that AI platforms
can use to train their models. Organizations may choose a more conservative
approach by blocking all AI crawlers until the impact of allowing those
interactions is understood. For entities that see benefit in legitimate
crawling of public properties, the ability to allow controlled access by
verified AI crawlers while blocking bad actors is critical in today’s
environment. From within the organization, establishing what roles and
tasks require access to AI platforms is a critical early step in getting ahead
of increased regulations.
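The conservative "block all AI crawlers" posture mentioned above is often implemented first in robots.txt. A sketch using user-agent tokens that several AI crawler operators have publicly documented (tokens change over time, so verify against each vendor's current documentation):

```text
# robots.txt: disallow known AI training/answer crawlers (illustrative list)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

Note that robots.txt is purely advisory; the "verified AI crawlers" approach described above typically also requires enforcement at the CDN or WAF layer, such as checking that a crawler's requests originate from the operator's published IP ranges.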
The AI blueprint: Engineering a new era of compliance in digital finance
The dynamism of the regulatory environment, along with RBI’s robust oversight,
underscores the necessity for constant adaptation within the industry. As
regulatory compliance requirements continue to evolve, organisations are
required to maintain high standards and avoid legal risks. The advent of AI in
regulation and compliance provides numerous use cases. ... In the
ever-evolving legal and compliance landscape, staying ahead of the curve can
potentially save businesses hefty penalties and lawsuits. With NLP
capabilities, AI-driven solutions are instrumental in monitoring and
predicting changes in the regulatory system. ... AI has the power to
streamline compliance-based screenings with accuracy and efficiency. Banks and
financial institutions receive multiple alerts and notifications; it becomes a
tedious task to filter through them all. AI helps in assessing alerts,
identifying patterns, and providing solutions for further action. It assists
in swiftly screening customer profiles for fraudulent patterns and anomalies,
which is crucial for the mandatory KYC process.
Can AI Truly Transform the Developer Experience?
Coding assistants and the other developer-focused AI capabilities I mentioned
will likely enhance, rather than transform, DevEx. To be clear, developers should
be given access to these tools; they want to use them, find them helpful, and,
most importantly, expect access to them. However, these tools are unlikely to
resolve the existing DevEx issues within your organization. Improving DevEx
starts with asking your developers what needs improvement. Once you have this
list (which is likely to be a long one), you can identify the best way to solve
these challenges, which may include using AI. ... Using AI to remove the
developer toil associated with resolving tech debt improves one of the common
challenges to a good DevEx. It allows developers to commit more time to tasks
like releasing new features. We’ve had great feedback on AutoFix internally,
and we’re working on making this available to customers later this year. The
currently available AI capabilities for developers are pretty impressive.
Developers expect access to these tools to assist them with their daily tasks,
which is a good enough reason to provide access to them.
The secret to CIO success? Unleash your inner CEO
It is now understood to a far greater degree than in the past that the CIO
must align technologies with business goals to achieve maximum outcomes, but
that does not mean the CIO has to do it all — or take a CEO-like role. Every
C-suite role is evolving thanks to dramatic technological change. Moreover,
the C-suite continues to expand, with IT leaders increasingly stepping in to
take on these new leadership roles. Still, there is no doubt the CIO role has
evolved in prominence, prestige, and power to be an agent of change. “Saying
the CIO will replace the CEO is a stretch, but CIOs being viable candidates as
successors for the top spot is real, as business and technology strategy
converge,” notes Ashley Skyrme, senior managing director at Accenture, who
views the change more as one that will transform the qualities looked for when
hiring the next generation of CEOs. “What is the new CEO profile in an era of
genAI and data-led business models? How does this change CEO succession
planning and who you select as your CIO?” she asks. The answers to those
questions will determine “what CEOs need to learn and what CIOs need to learn”
to succeed in the future, Skyrme says.
Quote for the day:
"The distance between insanity and genius is measured only by success." -- Bruce Feirstein