Is AI killing freelance jobs?
Work that has previously been done by humans, such as copywriting and
developing code, is being replicated by AI-powered tools like ChatGPT and
Copilot, leading many workers to fear that these tools may well sweep
their jobs out from under them. And one population appears to be especially
vulnerable: freelancers. ... While writing and coding roles were the most
heavily affected freelance positions, they weren’t the only ones. For
instance, the researchers found a 17% decrease in postings related to image
creation following the release of DALL-E. Of course, the study is limited by
its short-term outlook. Still, the researchers found that the trend of
replacing freelancers has only increased over time. When the researchers split
their nine months of data into three-month segments, each successive segment
showed further declines in freelance job postings. Zhu fears that the
number of freelance opportunities will not rebound. “We can’t say much about
the long-term impact, but as far as what we examined, this short-term
substitution effect was going deeper and deeper, and the demands didn’t come
back,” Zhu says.
Can data centers keep up with AI demands?
As the cloud market has matured, leaders have started to view their IT
infrastructure through the lens of ‘cloud economics.’ This means studying the
cost, business impact, and resource usage of a cloud IT platform in order to
collaborate across departments and determine the value of cloud investments.
It can be a particularly valuable process for companies looking to introduce
and optimize AI workloads, as well as reduce energy consumption. ... As the
demand for these technologies continues to grow, businesses need to prioritize
environmental responsibility when adopting and integrating AI into their
organizations. It is essential that companies understand the impact of their
technology choices and take steps to minimize their carbon footprint.
Investing in knowledge around the benefits of the cloud is also crucial for
companies looking to transition to sustainable technologies. Tech leaders
should educate themselves and their teams about how the cloud can help them
achieve their business goals while also reducing their environmental impact.
As newer technologies like AI continue to grow, companies must prepare to
handle these workloads in the most efficient and sustainable way possible.
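As a toy illustration of what "studying cost and resource usage" can look like in practice, the sketch below (Python; every figure is a made-up placeholder, not vendor data) works out cost and energy per unit of AI work:

    # Toy cloud-economics arithmetic: cost and energy per unit of AI work.
    # All figures are invented placeholders; real values come from your
    # provider's billing data and telemetry.
    hourly_cost_usd = 3.50          # placeholder: GPU instance price per hour
    hourly_energy_kwh = 0.9         # placeholder: measured power draw
    inferences_per_hour = 120_000   # placeholder: observed throughput

    cost_per_1k = hourly_cost_usd / inferences_per_hour * 1_000
    energy_per_1k = hourly_energy_kwh / inferences_per_hour * 1_000
    print(f"Cost per 1,000 inferences:   ${cost_per_1k:.4f}")
    print(f"Energy per 1,000 inferences: {energy_per_1k:.5f} kWh")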
Building a Bulletproof Disaster Recovery Plan
A lot of companies can't effectively recover because they haven't planned
their tech stack around the need for data recovery, which should be central to
core technology choices. When building a plan, companies should understand the
different ways that applications across an organization’s infrastructure are
going to fail and how to restore them. ... When developing the plan,
prioritizing the key objectives and systems is crucial to ensure teams don't
waste time on nonessential operations. Then, ensure that the right people
understand these priorities by building out and training your incident
response teams with clear roles and responsibilities. Determine who
understands the infrastructure and what data needs to be prioritized. Finally,
ensure they're reachable 24/7 by maintaining emergency contacts and
after-hours contact information. Storage backups are a critical part of
disaster recovery, but they should not be considered the entire plan. Though
essential for data restoration, they require meticulous planning regarding
storage solutions, versioning, and the nuances of cold storage.
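To make those priorities concrete, here is a minimal sketch (Python; all system names, RTO/RPO values, and contacts are hypothetical) of how a plan's priorities and on-call information might be codified rather than left in a document:

    # Minimal sketch: codify recovery priorities and 24/7 contacts so the plan
    # is executable, not just a document. All values below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class RecoveryTarget:
        system: str
        rto_minutes: int    # Recovery Time Objective: max acceptable downtime
        rpo_minutes: int    # Recovery Point Objective: max acceptable data loss
        owner: str          # who understands this piece of infrastructure
        after_hours: str    # emergency / after-hours contact

    PLAN = [
        RecoveryTarget("payments-db", 15, 5, "dba-team", "+1-555-0100"),
        RecoveryTarget("internal-wiki", 1440, 720, "it-ops", "+1-555-0199"),
    ]

    # Restore in priority order so teams don't burn time on nonessential systems.
    for t in sorted(PLAN, key=lambda t: t.rto_minutes):
        print(f"{t.system}: restore within {t.rto_minutes} min "
              f"(owner {t.owner}, after hours {t.after_hours})")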
How are business leaders responding to the AI revolution?
While AI provides a potential treasure trove of possibilities, particularly
when it comes to effectively using data, business leaders must tread carefully
when it comes to risks around data privacy and ethical implications. The
advancements of generative AI have been consistently in the news, but so
have the setbacks major tech companies are facing over their use of data.
... “Controls are critical,” he said. “Data privileges may need to be extended
or expanded to get the full value across ecosystems. However, this brings
inherent risks of unintentional data transmission and data not being used for
the purpose intended, so organisations must ensure strong controls and
platforms that can highlight and visualise anomalies that may require
attention.” ... “Enterprises must be courageous around shutting down
automation and AI models that, while showing some short-term gain, may cause
commercial and reputational damage in the future if left unchecked.” He warned
that a current skills shortage in the area of AI might hold businesses
back.
AI development on a Copilot+ PC? Not yet
Although the Copilot+ PC platform (and the associated Copilot Runtime) shows a
lot of promise, the toolchain is still fragmented. As it stands, it’s hard to
go from model to code to application without having to step out of your IDE.
However, it’s possible to see how a future release of the AI Toolkit for
Visual Studio Code could bundle the QNN ONNX runtimes and make them
available through DirectML for .NET application development. That
future release needs to be sooner rather than later, as devices are already in
developers’ hands. Getting AI inference onto local devices is an important
step in reducing the load on Azure data centers. Yes, the current state of
Arm64 AI development on Windows is disappointing, but that's more because we
can see what it could become than because of a lack of tools. Many
necessary elements are here; what’s needed is a way to bundle them to give us
an end-to-end AI application development platform so we can get the most out
of the hardware. For now, it might be best to stick with the Copilot Runtime
and the built-in Phi-Silica model with its ready-to-use APIs.
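For developers who want to experiment now, ONNX Runtime's QNN execution provider is the closest thing to a supported path onto the NPU. The sketch below (Python; assumes the onnxruntime-qnn package, and the model path and input shape are stand-ins) shows the shape of it:

    # Minimal sketch: local inference through ONNX Runtime's QNN execution
    # provider, falling back to CPU for graphs the NPU can't take. Assumes the
    # onnxruntime-qnn package; the model path and input shape are placeholders.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # placeholder: a quantized ONNX model
        providers=["QNNExecutionProvider", "CPUExecutionProvider"],
        provider_options=[{"backend_path": "QnnHtp.dll"}, {}],
    )

    input_name = session.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
    outputs = session.run(None, {input_name: dummy})
    print(outputs[0].shape)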
The Role of AI in Low- and No-Code Development
While AI is invaluable for generating code, it's also useful in your low- and
no-code applications. Many low- and no-code platforms allow you to build and
deploy AI-enabled applications. They abstract away the complexity of adding
capabilities like natural language processing, computer vision, and other AI
APIs to your app. Users expect applications to offer features like voice prompts,
chatbots, and image recognition. Developing these capabilities "from scratch"
takes time, even for experienced developers, so many platforms offer modules
that make it easy to add them with little or no code. For example, Microsoft
has low-code tools for building Power Virtual Agents (now part of its Copilot
Studio) on Azure. These agents can plug into a wide variety of skills backed
by Azure services and drive them using a chat interface. Low- and no-code
platforms like Amazon SageMaker and Google's Teachable Machine manage tasks
like preparing data, training custom machine learning (ML) models, and
deploying AI applications.
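For a sense of what those modules hide, here is a rough sketch (Python; the endpoint, key, and response shape are hypothetical placeholders, not any specific vendor's API) of the plumbing behind a single "image recognition" block:

    # Rough sketch of the plumbing a low-code module wraps: calling a managed
    # image-recognition REST API by hand. Endpoint, key, and response shape
    # are hypothetical placeholders.
    import requests

    API_URL = "https://example-vision-service/analyze"  # placeholder endpoint
    API_KEY = "YOUR-KEY"                                # placeholder credential

    def recognize(image_path: str) -> list[str]:
        """Upload an image and return the labels the service reports."""
        with open(image_path, "rb") as f:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                files={"image": f},
                timeout=30,
            )
        resp.raise_for_status()
        return [tag["name"] for tag in resp.json().get("tags", [])]

    # A drag-and-drop module reduces all of this (plus retries and error
    # handling) to one configurable block.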
The 5 Worst Anti-Patterns in API Management
As a modern Head of Platform Engineering, you strongly believe in
Infrastructure as Code (IaC). Managing and provisioning your resources in
declarative configuration files is a proven design pattern for
reducing costs and risks. Naturally, you will make this a strong foundation
while designing your infrastructure. During your API journey, you will be
tempted to take some shortcuts because it can be quicker in the short term to
configure a component directly in the API management UI than setting up a
clean IaC process. Or it might seem easier, at first, to change the
production runtime configuration manually instead of deploying an updated
configuration through a Git commit workflow. Of course, you can always fix it
later, but deep down you know those kludges stay there forever. Or worse, your
API management product fails to provide a consistent IaC user experience. Some
components need to be configured in the UI. Some parts use YAML, others use
XML, and you even have proprietary configuration formats.
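To see the difference in practice, here is a minimal sketch (Python; the settings and file format are hypothetical) of the IaC loop: desired state lives in a Git-tracked file, and deployment reconciles the runtime against it instead of anyone hand-editing the UI:

    # Minimal IaC sketch: desired gateway state comes from a version-controlled
    # file and is reconciled against the live runtime. Settings and file format
    # are hypothetical.
    import json

    desired_state = json.loads('{"rate_limit": 200, "auth": "oauth2"}')
    # ^ stand-in for a Git-tracked gateway.json deployed via a commit workflow

    live_state = {"rate_limit": 100, "auth": "oauth2"}
    # ^ stand-in for configuration fetched from the running gateway

    # Re-apply only the settings that drifted from the committed config.
    drift = {k: v for k, v in desired_state.items() if live_state.get(k) != v}
    if drift:
        print(f"Re-applying drifted settings: {drift}")  # push, never hand-edit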
Ownership and Human Involvement in Interface Design
When an interface needs to be built between two applications with different
owners, without any human involvement, we have the Application Integration
scenario. Application Integration is similar to IPC (interprocess communication) in some respects; for
example, the asynchronous broker-based choice I would make in IPC, I would
also make for Application Integration for more or less the same reasons.
However, in this case, there is another reason to avoid synchronous
technologies: ownership and separation of responsibilities. When you have to
integrate your application with another one, there are two main facts you need
to consider: a) Your knowledge of the other application and how it works is
usually low or even nonexistent, and b) Your control of how the other
application behaves is again low or nonexistent. The most robust approach to
application integration (again, a personal opinion!) is the approach shown in
Figure 3. Each of the two applications to be integrated provides a public
interface. The public interface should be a contract. This contract can be a
B2B agreement between the two application owners.
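To make the broker-based option concrete, here is a minimal sketch (Python with the pika client, assuming a RabbitMQ broker on localhost; the queue name and payload are illustrative) in which the producing application only publishes an event that honors the agreed contract and never calls the other application directly:

    # Minimal sketch of asynchronous, broker-based integration: publish an
    # event to a durable queue and let the other application consume it on its
    # own schedule. Assumes a RabbitMQ broker on localhost (pika client);
    # queue name and payload are illustrative.
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)  # the shared contract point

    event = {"order_id": "A-1001", "status": "created"}  # must honor the contract
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps(event).encode(),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()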
Reports show ebbing faith in banks that ignore AI fraud threat
The ninth edition of IDology's Global Fraud Report says businesses are worried about
the rate at which digital fraud is evolving and how established fraud threats
such as phishing may be amplified by generative AI. Forty-five percent of
companies are worried about generative AI’s ability to create more
sophisticated synthetic identities. Generative AI and machine learning are
named as the leading trends in identity verification – both the engine for,
and potential solution to, a veritable avalanche of fraud. IDology cites
recent reports from the Association of Certified Fraud Examiners (ACFE), which
say businesses worldwide lose an estimated 5 percent of their annual revenues
to fraud. “Fraud is changing every year alongside growing customer
expectations,” writes James Bruni, managing director of IDology, in the
report’s introduction. “The ability to successfully balance fraud prevention
with friction is essential for building customer loyalty and driving revenue.”
“As generative AI fuels fraud and customer expectations grow, multi-layered
digital identity verification is essential for successfully balancing fraud
prevention with friction to drive loyalty and grow revenue.”
What IT Leaders Can Learn From Shadow IT
Despite its shady reputation, shadow IT is frequently more in tune with
day-to-day business needs than many existing enterprise-deployed solutions,
observes Jason Stockinger, a cyber leader at Royal Caribbean Group, where he's
responsible for shoreside and shipboard cyber security. "When shadow IT
surfaces, an organization's technology leaders should work with business leaders to
ensure alignment with goals and deadlines," he advises via email. ... When
assessing a shadow IT tool's potential value, it's crucial to evaluate how it
might be successfully integrated into the official enterprise IT ecosystem.
"This integration must prioritize the organization's ability to safely adopt
and incorporate the tool without exposing itself to various risks, including
those related to users, data, business, cyber, and legal compliance,"
Ramezanian says. "Balancing innovation with risk management is paramount for
organizations to harness productivity opportunities while safeguarding their
interests." IT leaders might also consider turning to their vendors for
support. "Current software provider licensing may afford the opportunity to
add similar functionality to official tools," Orr says.
Quote for the day:
"Ninety percent of leadership is the
ability to communicate something people want." --
Dianne Feinstein