Quote for the day:
"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley
Doing authentication right
Like encryption, authentication is one of those things that you are tempted to
“roll your own” but absolutely should not. The industry has progressed enough
that you should definitely “buy and not build” your authentication solution.
Plenty of vendors offer easy-to-implement solutions and stay diligently on top
of the latest security issues. Authentication also becomes a tradeoff between
security and a good user experience. ... Passkeys are a relatively new
technology and there is a lot of FUD floating around out there about them. The
bottom line is that they are safe, secure, and easy for your users. They
should be your primary way of authenticating. Several vendors make
implementing passkeys not much harder than inserting a web component in your
application. ... Forcing users to use hard-to-remember passwords means they
will be more likely to write them down or use a simple password that meets the
requirements. Again, it may seem counterintuitive, but XKCD has it right. In
addition, the longer the password, the harder it is to crack. Let your users
create long, easy-to-remember passwords rather than force them to use shorter,
difficult-to-remember passwords. ... Six digits is the outer limit for OTP
codes, and you should consider shorter ones. Under no circumstances should you
require OTPs longer than six digits because they are vastly harder for users
to keep in short-term memory.
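The XKCD point is easy to check with a little arithmetic. The following Python sketch compares the rough search space of a short complex password against a four-word passphrase drawn from a diceware-style word list; the character-set and word-list sizes are illustrative assumptions, not a policy recommendation.

import math

# Illustrative assumptions: an 8-character password drawn from ~72 printable
# symbols versus a passphrase of four words from a 7,776-word diceware list.
short_complex = 72 ** 8        # e.g. "Tr0ub4d&"
long_passphrase = 7776 ** 4    # e.g. "correct horse battery staple"

print(f"8-char complex password: {math.log2(short_complex):.1f} bits")
print(f"4-word passphrase:       {math.log2(long_passphrase):.1f} bits")

Four common words already edge out the eight-character jumble, and every extra word adds roughly 13 more bits while staying easy to remember.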
Augmenting Software Architects with Artificial Intelligence
Technical debt is mistakenly thought of as just a source code problem, but the
concept is also applicable to source data (this is referred to as data debt)
as well as your validation assets. AI has been used for years to analyze
existing systems to identify potential opportunities to improve the quality
(to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity
Static Analysis statically analyze existing code. Applitools Visual AI
dynamically finds user interface (UI) bugs, and Veracode's DAST finds runtime
vulnerabilities in web apps. The advantage of this use case is that it
pinpoints aspects of your implementation that potentially should be improved.
As described earlier, AI tooling offers the potential for greater range,
thoroughness, and trustworthiness of the work products compared with that
of people. Drawbacks to using AI tooling to identify technical debt include
the accuracy, IP, and privacy risks described above. ... As software
architects, we regularly work with legacy implementations that we need to
leverage and often evolve. This software is often complex, using a myriad of
technologies for reasons that have been forgotten over time. Tools such as
CAST Imaging, which visualizes existing code, and ChartDB, which visualizes
legacy data schemas, provide a “bird's-eye view” of the actual situation that
you face.
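Commercial analyzers like the ones above go much further, but the core idea of statically flagging technical-debt hotspots can be sketched in a few lines. The Python below is a toy heuristic (not how SonarQube, CAST, or Coverity actually work) that walks a module's AST and flags functions whose branching complexity crosses a threshold; the file name is hypothetical.

import ast

def flag_complex_functions(source: str, threshold: int = 10):
    """Flag functions whose rough cyclomatic complexity exceeds a threshold."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(
                isinstance(child, (ast.If, ast.For, ast.While,
                                   ast.ExceptHandler, ast.BoolOp))
                for child in ast.walk(node)
            )
            if branches + 1 > threshold:
                findings.append((node.name, node.lineno, branches + 1))
    return findings

with open("legacy_module.py") as f:    # hypothetical legacy module
    for name, line, score in flag_complex_functions(f.read()):
        print(f"{name} (line {line}): complexity {score} -- candidate for refactoring")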
Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'
Your first step should be to evaluate the state of your company’s cyber
defenses, including communications and IT infrastructure, and the
cybersecurity measures you already have in place—identifying any
vulnerabilities and gaps. One vulnerability to watch for is a dependence on
multiple security platforms, patches, policies, hardware, and software, where
a lack of tight integration can create gaps that hackers can readily exploit.
Consider using operational resilience assessment software as part of the
exercise, and if you lack the internal know-how or resources to manage the
assessment, consider enlisting a third-party operational resilience risk
consultant. ... Aging network communications hardware and software, including
on-premises systems and equipment, are top targets for hackers during a
disaster because they often include a single point of failure that’s readily
exploitable. The best counter in many cases is to move the network and other
key communications infrastructure (a contact center, for example) to the
cloud. Not only do cloud-based networks such as SD-WAN (software-defined wide
area network) have the resilience and flexibility to preserve connectivity
during a disaster, they also tend to come with built-in cybersecurity
measures.
California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal
“The AGO encourages the responsible use of AI in ways that are safe, ethical,
and consistent with human dignity,” the advisory says. “For AI systems to
achieve their positive potential without doing harm, they must be developed
and used ethically and legally,” it continues, before dovetailing into the
many ways in which AI companies could, potentially, be breaking the law. ...
There has been quite a lot of, shall we say, hyperbole, when it comes to the
AI industry and what it claims it can accomplish versus what it can actually
accomplish. Bonta’s office says that, to steer clear of California’s false
advertising law, companies should refrain from “claiming that an AI system has
a capability that it does not; representing that a system is completely
powered by AI when humans are responsible for performing some of its
functions; representing that humans are responsible for performing some of a
system’s functions when AI is responsible instead; or claiming without basis
that a system is accurate, performs tasks better than a human would, has
specified characteristics, meets industry or other standards, or is free from
bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI
industry represents, though it doesn’t even get around to mentioning U.S.
copyright law, which is another legal gray area where AI companies are
perpetually running into trouble.
Knowledge graphs: the missing link in enterprise AI
Knowledge graphs are a layer of connective tissue that sits on top of raw data
stores, turning information into contextually meaningful knowledge. So in
theory, they’d be a great way to help LLMs understand the meaning of corporate
data sets, making it easier and more efficient for companies to find relevant
data to embed into queries, and making the LLMs themselves faster and more
accurate. ... Knowledge graphs reduce hallucinations, he says, but they also
help solve the explainability challenge. Knowledge graphs sit on top of
traditional databases, providing a layer of connection and deeper
understanding, says Anant Adya, EVP at Infosys. “You can do better contextual
search,” he says. “And it helps you drive better insights.” Infosys is now
running proof of concepts to use knowledge graphs to combine the knowledge the
company has gathered over many years with gen AI tools. ... When a knowledge
graph is used as part of the RAG infrastructure, explicit connections can be
used to quickly zero in on the most relevant information. “It becomes very
efficient,” said Duvvuri. And companies are taking advantage of this, he says.
“The hard question is how many of those solutions are seen in production,
which is quite rare. But that’s true of a lot of gen AI applications.”
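The “explicit connections” idea is easy to picture with a toy example. The Python sketch below uses the networkx library and an invented three-fact graph: instead of relying only on vector similarity, the retriever walks the graph around entities mentioned in a query and hands those facts to the LLM as grounding context (the entities, relations, and retrieval step are all illustrative assumptions).

import networkx as nx

# Toy enterprise knowledge graph: nodes are entities, edges carry relations.
kg = nx.DiGraph()
kg.add_edge("Acme Corp", "Invoice-4711", relation="issued")
kg.add_edge("Invoice-4711", "Project Phoenix", relation="bills")
kg.add_edge("Project Phoenix", "Jane Doe", relation="led_by")

def graph_context(entities, hops=2):
    """Collect facts within a few hops of the query entities for RAG grounding."""
    undirected = kg.to_undirected()          # allow hops in either direction
    facts = set()
    for entity in entities:
        if entity not in kg:
            continue
        neighborhood = nx.ego_graph(undirected, entity, radius=hops)
        for u, v, data in neighborhood.edges(data=True):
            head, tail = (u, v) if kg.has_edge(u, v) else (v, u)
            facts.add(f"{head} --{data['relation']}--> {tail}")
    return sorted(facts)

# These facts would be prepended to the prompt alongside vector-search hits.
print(graph_context(["Acme Corp"]))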
U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it
The Copyright Office determined that prompts are generally instructions or
ideas rather than expressive contributions, which are required for copyright
protection. Thus, an image generated with a text-to-image AI service such as
Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify
for copyright protection. However, if the image was used in conjunction with a
human-authored or human-edited article (such as this one), then it would seem
to qualify. Similarly, for those looking to use AI video generation tools such
as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others,
simply generating a video clip based on a description would not qualify for
copyright. Yet, a human editing together multiple AI generated video clips
into a new whole would seem to qualify. The report also clarifies that using
AI in the creative process does not disqualify a work from copyright
protection. If an AI tool assists an artist, writer or musician in refining
their work, the human-created elements remain eligible for copyright. This
aligns with historical precedents, where copyright law has adapted to new
technologies such as photography, film and digital media. ... While some
had called for additional protections for AI-generated content, the report
states that existing copyright law is sufficient to handle these issues.
From connectivity to capability: The next phase of private 5G evolution
Faster connectivity is just one positive aspect of private 5G networks; they are
the basis of the current digital era. These networks outperform conventional
public 5G capabilities, giving businesses incomparable control, security, and
flexibility. For instance, private 5G is essential to the seamless connection of
billions of devices, ensuring ultra-low latency and excellent reliability in the
worldwide IoT industry, which has the potential to reach $650.5 billion by 2026,
as per Markets and Markets. Take digital twins, for example—virtual replicas of
physical environments such as factories or entire cities. These replicas require
real-time data streaming and ultra-reliable bandwidth to function effectively.
Private 5G enables this by delivering consistent performance, turning
theoretical models into practical tools that improve operational efficiency and
decision-making. ... Private 5G is also making big improvements for sectors
that rely on efficiency and precision. For instance, in the
logistics sector, it connects fleets, warehouses, and ports with fast,
low-latency networks, streamlining operations throughout the supply chain. In
fleet management, private 5G allows real-time tracking of vehicles, improving
route planning and fuel use.
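To make the fleet-tracking use case concrete, here is a minimal Python sketch that publishes vehicle position telemetry over MQTT to an on-premises broker; the broker hostname, topic layout, and payload fields are assumptions for illustration, and the private 5G network is simply the transport underneath the application.

import json
import time

from paho.mqtt import publish   # pip install paho-mqtt

BROKER_HOST = "telemetry.fleet.local"   # hypothetical broker reachable over the private 5G APN

def publish_position(vehicle_id, lat, lon):
    """Send one position fix; the low-latency link keeps updates near real time."""
    payload = json.dumps({"vehicle": vehicle_id, "lat": lat, "lon": lon, "ts": time.time()})
    publish.single(
        topic=f"fleet/{vehicle_id}/position",
        payload=payload,
        qos=1,
        hostname=BROKER_HOST,
    )

publish_position("truck-042", 40.4168, -3.7038)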
American CISOs should prepare now for the coming connected-vehicle tech bans
The rule BIS released is complex and intricate and relies on many pre-existing
definitions and policies used by the Commerce Department for different
commercial and industrial matters. However, in general, the restrictions and
compliance obligations under the rule affect the entire US automotive industry,
including all new on-road vehicles sold in the United States (except commercial
vehicles such as heavy trucks, for which rules will be determined later). All
companies in the automotive industry, including importers and manufacturers of
CVs, equipment manufacturers, and component suppliers, will be affected. BIS
said it may grant limited specific authorizations to allow mid-generation CV
manufacturers to participate in the rule’s implementation period, provided that
the manufacturers can demonstrate they are moving into compliance with the next
generation. ... Connected vehicles and related component suppliers are required
to scrutinize the origins of vehicle connectivity systems (VCS) hardware and
automated driving systems (ADS) software to ensure compliance. Suppliers must
exclude components with links to the PRC or Russia, which has significant
implications for sourcing practices and operational processes.
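That screening obligation lends itself to automation. The Python sketch below filters a software bill of materials (SBOM)-style component list for entries whose declared country of origin falls in the restricted set; the field names and the idea of a single country-of-origin attribute are simplifying assumptions, not the BIS rule's actual data model.

RESTRICTED = {"CN", "RU"}   # PRC and Russia, per the rule's scope

components = [   # hypothetical SBOM entries
    {"name": "vcs-modem-firmware", "supplier": "ExampleTelematics", "country_of_origin": "CN"},
    {"name": "ads-planner-lib",    "supplier": "ExampleAutonomy",   "country_of_origin": "US"},
]

def flag_restricted(items):
    """Return components whose declared origin requires compliance review."""
    return [c for c in items if c["country_of_origin"] in RESTRICTED]

for comp in flag_restricted(components):
    print(f"REVIEW: {comp['name']} from {comp['supplier']} ({comp['country_of_origin']})")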
What to know about DeepSeek AI, from cost claims to data privacy
"Users need to be aware that any data shared with the platform could be subject
to government access under China's cybersecurity laws, which mandate that
companies provide access to data upon request by authorities," Adrianus
Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via
email. According to some observers, the fact that R1 is open-source means
increased transparency, giving users the opportunity to inspect the model's
source code for signs of privacy-related activity. Regardless, DeepSeek also
released smaller versions of R1, which can be downloaded and run locally to
avoid any concerns about data being sent back to the company (as opposed to
accessing the chatbot online). ... "DeepSeek's new AI model likely does use less
energy to train and run than larger competitors' models," confirms Peter
Slattery, a researcher on MIT's FutureTech team who led its Risk Repository
project. "However, I doubt this marks the start of a long-term trend in lower
energy consumption. AI's power stems from data, algorithms, and compute -- which
rely on ever-improving chips. When developers have previously found ways to be
more efficient, they have typically reinvested those gains into making even
bigger, more powerful models, rather than reducing overall energy usage."
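For readers who want to try the local route, the sketch below loads one of the distilled R1 checkpoints with the Hugging Face transformers library and runs inference entirely on the local machine; the model ID is an assumption that should be verified on the Hugging Face Hub, and even the smaller checkpoints need a reasonably capable GPU or plenty of RAM.

# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"   # assumed checkpoint name; verify before use

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Everything below runs locally; no prompt or output leaves the machine.
inputs = tokenizer("Summarize the privacy tradeoffs of cloud chatbots.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))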
The AI Imperative: How CIOs Can Lead the Charge
For CIOs, AGI will take this to the next level. Imagine systems that don't just
fix themselves but also strategize, optimize and innovate. AGI could automate
90% of IT operations, freeing up teams to focus on strategic initiatives. It
could revolutionize cybersecurity by anticipating and neutralizing threats
before they strike. It could transform data into actionable insights, driving
smarter decisions across the organization. The key is to begin incrementally,
prove the value and scale strategically. AGI isn't just a tool; it's a
game-changer. ... Cybersecurity risks are real and imminent. Picture this:
you're using an open-source AI model and suddenly, your system gets hacked.
Turns out, a malicious contributor slipped in some rogue code. Sounds like a
nightmare, right? Open-source AI is powerful, but has its fair share of risks.
Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor
support are absolutely real concerns. But this is true for any new technology.
With the right safeguards, we can minimize and mitigate these risks. Here's what
I recommend: Regularly review and update open-source libraries. CIOs should
encourage their teams to use tools like software composition analysis to detect
suspicious changes. Train your team to manage and secure open-source AI
deployments.
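A lightweight version of the "detect suspicious changes" advice, independent of any particular software composition analysis product: record a hash of each pinned dependency file and alert when it changes outside a reviewed update. This Python sketch is illustrative, not a substitute for a real SCA tool; the file names are assumptions.

import hashlib
import json
from pathlib import Path

BASELINE = Path("dependency_baseline.json")        # hypothetical baseline record
LOCKFILES = ["requirements.txt", "poetry.lock"]    # adjust to your project

def current_hashes():
    """Hash each lockfile so unexpected edits or tampering stand out."""
    return {
        name: hashlib.sha256(Path(name).read_bytes()).hexdigest()
        for name in LOCKFILES
        if Path(name).exists()
    }

def check_for_changes():
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    current = current_hashes()
    for name, digest in current.items():
        if baseline.get(name) not in (None, digest):
            print(f"ALERT: {name} changed since last review -- inspect the diff")
    BASELINE.write_text(json.dumps(current, indent=2))

check_for_changes()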