Why Large Language Models Won’t Replace Human Coders
Are any of these GenAI tools likely to become substitutes for real programmers? Probably not, unless the accuracy of coding answers supplied by models rises to within an acceptable margin of error (i.e., 98-100%). Let’s assume for argument’s sake, though, that GenAI does reach that mark. Does that mean the role of software engineering will shift so that you simply review and verify AI-generated code instead of writing it? Such a hypothesis could prove faulty if the four-eyes principle is anything to go by. It’s one of the most important mechanisms of internal risk control, mandating that any activity of material risk (like shipping software) be reviewed and double-checked by a second, independent, and competent individual. Unless AI is reclassified as an independent and competent lifeform, it shouldn’t qualify as one pair of eyes in that equation anytime soon. And if there is a future in which GenAI becomes capable of end-to-end development and of building human-machine interfaces, it is not the near future. LLMs can do an adequate job of interacting with text and elements of an image; there are even tools that can convert web designs into frontend code.
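To make the four-eyes argument concrete, here is a minimal sketch of a merge gate that counts only independent human reviewers as pairs of eyes (all names, types, and the two-pair threshold are illustrative assumptions, not any particular platform’s API):

```python
from dataclasses import dataclass

# Minimal sketch of a four-eyes merge gate: a change ships only after at
# least two independent, competent pairs of (human) eyes have seen it.
# An AI author or AI reviewer does not count as a pair of eyes, which is
# the article's point. All names here are illustrative.

@dataclass(frozen=True)
class Party:
    name: str
    is_human: bool

def satisfies_four_eyes(author: Party, approvers: list[Party]) -> bool:
    """Count distinct humans who have examined the change, author included."""
    humans = {p.name for p in [author, *approvers] if p.is_human}
    return len(humans) >= 2

# A human author plus one human reviewer passes:
print(satisfies_four_eyes(Party("alice", True), [Party("bob", True)]))        # True
# An AI author with a single human reviewer does not:
print(satisfies_four_eyes(Party("codegen-ai", False), [Party("bob", True)]))  # False
```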
The future of farming
SmaXtec’s solution requires cows to swallow what the company calls a “bolus” - a small device containing sensors that measure a cow’s pH and temperature, an accelerometer, and a small processor. “It sits inside the cow and constantly
measures very important body health parameters, including temperature, the
amount of water intake, the drinking volume, the activity of the animal, and the
contraction of the rumen in the dairy cow,” Scherer said. Rumination is a
process of regurgitation and re-digestion. “You could almost envision this as a
Fitbit for cows,” he said, adding that by constantly measuring those parameters
at a high density - short timeframes with high robustness and high accuracy -
SmaXtec can make assessments about potential diseases that are about to break
out. ... Small Robot Company is known for its Tom robot, which is distantly reminiscent of Doctor Who’s robotic dog K9. The device wheels itself up
and down fields, capturing images and mapping out the land. The data is then
taken from Tom’s SSD and uploaded to the cloud, where an AI identifies the
different plants and weeds, and provides a customized fertilizer and herbicide
plan for the crops.
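To illustrate the kind of assessment such high-density readings make possible, here is a minimal sketch of a rolling-baseline temperature alert (the window size, the deviation threshold, and the simulated fever are illustrative assumptions, not SmaXtec’s actual model):

```python
from collections import deque

# Minimal sketch: turn a dense stream of bolus temperature readings into
# an early-warning signal by comparing each reading to a rolling baseline.
# Window size and the +1.0 degree C threshold are illustrative assumptions.

def detect_anomalies(readings_c, window=48, threshold_c=1.0):
    """Flag readings that deviate sharply from the recent rolling average."""
    recent = deque(maxlen=window)
    alerts = []
    for i, temp in enumerate(readings_c):
        if len(recent) == window:
            baseline = sum(recent) / window
            if temp - baseline > threshold_c:
                alerts.append((i, temp, baseline))
        recent.append(temp)
    return alerts

# Simulated stream: normal readings around 38.6 C, then a fever spike.
stream = [38.6] * 60 + [39.9] * 5
for index, temp, baseline in detect_anomalies(stream):
    print(f"reading {index}: {temp:.1f} C vs baseline {baseline:.1f} C")
```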
The CISO: 2024’s Most Important C-Suite Officer
Short- and long-term solutions to navigating increased regulatory and plaintiff
bar scrutiny start with the CISO. Cybersecurity defense strategies,
implementation and monitoring fall under the purview of the CISO, who must
closely coordinate with other members of the C-suite as well as boards of
directors. Recent lawsuits highlight individual fiduciary liability for
cybersecurity controls and accurate disclosures. Individual liability demands
increased knowledge of, participation in and shared ownership of cybersecurity
defense decisions. Gone are the days when liability risks could be eliminated by
placing the blame on a single security officer. Boards and other C-suite
executives now bear personal risk for their company’s cybersecurity defenses and preparedness. CISOs carry primary ownership for formulating and maintaining robust cybersecurity defenses and preparedness. This starts with implementing secure-by-design and other leading security frameworks and extends to effective
real-time threat monitoring and continual technology assessment of company
capabilities to defend against advanced cyber threats or the “Defining Threat of
Our Time.”
Generative AI and the big buzz about small language models
LLMs can create a wide array of content, from text and images to audio and video,
with multimodal systems emerging to handle more than one of the above tasks.
They process massive amounts of information to execute natural language
processing (NLP) tasks that approximate human speech in response to prompts. As
such, they are ideal for pulling from vast amounts of data to generate a wide range of content, as well as for conversational AI tasks. This requires a
significant number of servers, storage and the all-too-scarce GPUs that power
the models — at a cost some organizations are unwilling or unable to bear. It’s
also tough to satisfy ESG requirements when LLMs hog compute resources for
training, augmenting, fine-tuning and other tasks organizations require to hone
their models. In contrast, SLMs consume fewer computing resources than their
larger brethren and provide surprisingly good performance — in some cases on par
with LLMs on certain benchmarks. They’re also more customizable, allowing organizations to tailor them to specific tasks. For instance, SLMs may be trained on curated data sets and paired with retrieval-augmented generation (RAG) pipelines that refine search by grounding responses in retrieved documents. For many organizations, SLMs may be ideal for
running models on premises.
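As a rough illustration of the RAG pattern mentioned above, here is a minimal, self-contained sketch in which simple word-overlap scoring stands in for a real vector store, and the final prompt would be handed to a (hypothetical) small model:

```python
# Minimal sketch of retrieval-augmented generation (RAG). Assumptions: a
# real system would use embeddings and a vector store; here, word-overlap
# scoring stands in for semantic retrieval, and the corpus is illustrative.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "SLMs consume fewer computing resources than LLMs.",
    "RAG retrieves relevant documents to ground model responses.",
    "On-premises deployment keeps sensitive data inside the organization.",
]

query = "Why run SLMs on premises?"
print(build_prompt(query, retrieve(query, corpus)))
# The resulting prompt would then be sent to the (hypothetical) small model.
```

Because the heavy lifting is retrieval plus a compact model, this pattern is one reason SLMs can plausibly deliver acceptable results on premises without LLM-scale hardware.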
Captive centers are back. Is DIY offshoring right for you?
Captive centers are no longer just a means of value creation, providing cost
savings and driving process standardization. They are driving organization-wide
innovation, facilitating digital transformations, and contributing to revenue
growth. Unlike earlier generations of what are increasingly being called “global capability centers,” which tended to be large operations set up by
multinationals, more than half of last year’s new centers were launched by
first-time adopters — and on the smaller side, with fewer than 250 full-time employees; in some cases, fewer than 50. The desire to build internal IT
capabilities amid a tight talent market is at the heart of the trend. As
companies have grown comfortable with offshore and nearshore delivery, the
captive model offers the opportunity to tap larger populations of lower-cost
talent without handing the reins to a third party. “Eroding customer
satisfaction with outsourcing relationships — per some reports, at an all-time
low — has caused some companies to opt to ‘do it themselves,’” says Dave
Borowski, senior partner, operations excellence, at West Monroe. What’s more,
setting up a captive center no longer needs to be entirely DIY.
Questioning cloud’s environmental impact
Contrary to popular belief, cloud computing is not inherently green. Cloud data
centers require a lot of energy to power and maintain their infrastructure. That
should be news to nobody. Cloud is becoming the largest user of data center
space, perhaps only to be challenged by the growth of AI data centers, which are
becoming a developer’s dream. But wait, don’t cloud providers use solar and
wind? Although some use renewable energy, not all adopt energy-efficient
practices. Many cloud services rely on coal-fired power. Ask cloud providers
which of their data centers run on renewable energy. Most will provide a non-answer, saying their
power types are complex and ever-changing. I’m not going too far out on a limb
in stating that most use nonrenewable power and will do so for the foreseeable
future. The carbon emissions from cloud computing largely stem from the power
consumed by the providers’ platforms and the inefficiencies embedded within
applications running on these platforms. A cloud provider may do an excellent job of building a multitenant system that optimizes the servers it runs, but it has no control over how well its customers leverage those resources.
Revolutionizing Real-Time Data Processing: The Dawn of Edge AI
For effective edge computing, efficient and computationally cost-effective
technology is needed. One promising option is reservoir computing, a
computational method designed for processing signals that are recorded over
time. It can transform these signals into complex patterns using reservoirs
that respond nonlinearly to them. In particular, physical reservoirs, which
use the dynamics of physical systems, are both computationally cost-effective
and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system, which must be adjusted for best learning performance.
... Recently, Professor Kentaro Kinoshita and Mr. Yutaro Yamazaki developed
an optical device with features that support physical reservoir computing and
allow real-time signal processing across a broad range of timescales within a
single device. Speaking of their motivation for the study, Prof. Kinoshita
explains: “The devices developed in this research will enable a single device
to process time-series signals with various timescales generated in our living
environment in real-time. In particular, we hope to realize an AI device to
utilize in the edge domain.”
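For readers unfamiliar with the technique, here is a minimal software sketch of reservoir computing, an echo state network in NumPy; the sizes, spectral radius, and sine-wave task are illustrative assumptions, not details from the study:

```python
import numpy as np

# Minimal echo state network (ESN), a software form of reservoir computing.
# The reservoir is a fixed random recurrent network that responds
# nonlinearly to the input signal; only the linear readout is trained,
# which keeps learning computationally cheap.

rng = np.random.default_rng(0)
n_reservoir, spectral_radius = 200, 0.9  # illustrative choices

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))         # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))  # fixed recurrent weights
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # scale for stable dynamics

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D signal; collect its nonlinear states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in[:, 0] * u + W @ x)  # nonlinear response to the signal
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Train only the readout via ridge regression; the reservoir stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ targets)

pred = states @ W_out
print("readout MSE:", np.mean((pred - targets) ** 2))
```

The property mirrored here is the one the excerpt describes: the reservoir, like the physical device, is a fixed dynamical system that transforms time-series signals into richer patterns, and only the lightweight readout needs training.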
Agile software promises efficiency. It requires a cultural shift to get right
The end result of these fake agile practices is lip service and ceremonies at
the expense of the original manifesto’s principles, Bacon said. ... To
get agile right, Wickham recommended building on situations in your
organization where agile is practiced relatively effectively. Most often, that
involves teams building internal tools, such as administrative panels for
customer support or CI/CD pipelines. Those use cases have more tolerance for
“let’s put something up, ask for feedback, iterate, repeat,” he said. After
all, internal customers are willing to accept something that’s initially
imperfect. “This indicates to me that people comprehend agile and have at
least a baseline understanding of how to use it, but a lack of willingness
to use it as defined when it comes to external customers,” said Wickham.
... “Agile is an easy term to toss around as a ‘solution,’” Richmond said.
“But effective agile does not have a cookie-cutter solution to improving
execution.” Getting it right requires understanding the company’s challenges, how those challenges arise from the business environment, and how they impact business outcomes, and then, finally, identifying how to apply agile concepts to the business.
Building a Strong Data Culture: A Strategic Imperative
Effective executive backing is crucial for prioritizing and financing data
initiatives that help cultivate an organization’s data-centric culture.
Initiatives such as data literacy programs equip employees with vital data
skills that are fundamental to fostering such a culture. Nonetheless, these
programs often fail to thrive without the robust support of leadership.
Results from the same Alation research show that only 15 percent of companies
with moderate or weak data leadership integrate data literacy across most
departments or throughout the entire organization. This is in stark contrast
to the 61 percent adoption rate in companies with strong data leadership.
Moreover, strong data leadership involves more than just endorsement; it
requires executives to actively engage and set an example in data culture
initiatives. For instance, when an executive carves out time from her hectic
schedule to partake in data literacy training, it conveys a much more powerful
message to her team than if she were to simply instruct others to prioritize
such training. This hands-on approach by leaders underscores the importance of
data literacy and demonstrates their commitment to embedding a data-driven
culture in the organization.
Cybercriminals harness AI for new era of malware development
Threat actors have already shown how AI can help them develop malware with only a limited knowledge of programming languages, brainstorm new TTPs,
compose convincing text to be used in social engineering attacks, and also
increase their operational productivity. Large language models such as ChatGPT
remain in widespread use, and Group-IB analysts have observed continued
interest on underground forums in ChatGPT jailbreaking and specialized
generative pre-trained transformer (GPT) development, with users looking for ways to
bypass ChatGPT’s security controls. Group-IB experts have also noticed how,
since mid-2023, four ChatGPT-style tools have been developed for the purpose
of assisting cybercriminal activity: WolfGPT, DarkBARD, FraudGPT, and WormGPT
– all with different functionalities. FraudGPT and WormGPT are highly
discussed tools on underground forums and Telegram channels, tailored for
social engineering and phishing. Conversely, tools like WolfGPT, focusing on
code or exploits, are less popular due to training complexities and usability
issues. Yet, their advancement poses risks for sophisticated attacks.
Quote for the day:
"It takes courage and maturity to know
the difference between a hoping and a wishing." --
Rashida Jourdain