Integrating and Scaling AI Solutions with Modular Architecture
The modular AI ecosystem is a fluid environment comprising various players that contribute to the democratization and commoditization of AI technologies. Foundation model providers (e.g., ChatGPT and Koala) create core capabilities and specialized SLMs. Enterprise AI solution providers (e.g., Kore AI and Haptik) build prepackaged and customized domain- and industry-specific solutions. AI service providers (e.g., Hugging Face and Scale AI) offer platforms to build AI models and provide services such as data labeling, prompt engineering, and fine-tuning. Infrastructure players (e.g., AWS and Azure) provide cloud services to host AI models, data storage and management solutions, and high-performance computing resources. This ecosystem facilitates the rapid innovation of AI technologies while broadening their reach. ... Adopting modular AI architectures offers significant opportunities but also presents challenges. While the transition and upfront investment can be costly and demanding, particularly for legacy-laden enterprises, the potential benefits, such as enhanced agility, lower costs, and easier access to specialized AI tools, are compelling.
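To make the "modular" idea concrete, here is a minimal sketch (illustrative only; the provider names, endpoints, and interface are assumptions, not from the article) of how an application might hide interchangeable foundation-model providers behind one small interface, so a hosted API model and a locally downloaded open-source model can be swapped without touching business logic.

```python
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """Minimal contract every pluggable model provider must satisfy."""

    def generate(self, prompt: str) -> str: ...


@dataclass
class HostedAPIModel:
    """Stand-in for a model reached over a vendor API (hypothetical)."""
    endpoint: str

    def generate(self, prompt: str) -> str:
        # A real implementation would call the vendor's API here.
        return f"[{self.endpoint}] response to: {prompt}"


@dataclass
class LocalOpenSourceModel:
    """Stand-in for a locally hosted open-source model (hypothetical)."""
    model_path: str

    def generate(self, prompt: str) -> str:
        # A real implementation would run local inference here.
        return f"[{self.model_path}] response to: {prompt}"


def summarize_ticket(model: TextModel, ticket_text: str) -> str:
    """Business logic depends only on the interface, not on a vendor."""
    return model.generate(f"Summarize this support ticket: {ticket_text}")


if __name__ == "__main__":
    for provider in (HostedAPIModel("https://api.example-vendor.com/v1"),
                     LocalOpenSourceModel("./models/small-slm")):
        print(summarize_ticket(provider, "Login fails after password reset."))
```

The point of the sketch is the seam: swapping a provider means adding one small adapter, which is what makes the ecosystem's specialized models practical to adopt incrementally.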
Why cloud security outranks cost and scalability
As businesses integrate cloud computing, they grapple with escalating
complexity and cyberthreats. To remain agile and competitive, they embrace
cloud-native design principles, an operational model that allows for
independence and scalability through microservices and extensive API usage.
However, this does not come without its challenges. ... Complex cloud
environments mean that adopting cloud-native designs introduces layers of
complexity. Ensuring security across distributed components (microservices and
APIs) becomes crucial, as misconfigurations or vulnerabilities can lead to
significant risks. I’ve been screaming about this for years, along with
others. Although we accept complexity as a means to an end in terms of IT, it
needs to be managed in light of its impact on security. Compliance and
regulatory pressures mean that many industries face strict regulations
regarding data protection and privacy (e.g., GDPR, CCPA). Ensuring compliance
requires robust security measures to protect sensitive information in the
cloud. Many enterprises are moving to sovereign or local clouds that sit within the jurisdictions whose laws and regulations they must adhere to. Companies view this as reducing risk; even if those clouds are more expensive, the risk reduction is worth it.
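Because the excerpt's central worry is that misconfigured microservices and APIs create risk, here is a small, purely illustrative policy-as-code style check (the manifest fields and rules are assumptions, not any provider's schema) that flags obviously risky service settings before deployment.

```python
from typing import Iterable

# Hypothetical, simplified service manifests; in practice these would be
# parsed from Kubernetes or IaC definitions.
SERVICES = [
    {"name": "payments-api", "tls": True, "public": False, "cors": ["https://app.example.com"]},
    {"name": "reports-api", "tls": False, "public": True, "cors": ["*"]},
]


def audit(services: Iterable[dict]) -> list[str]:
    """Return human-readable findings for risky settings."""
    findings = []
    for svc in services:
        name = svc.get("name", "<unnamed>")
        if not svc.get("tls", False):
            findings.append(f"{name}: TLS disabled on service endpoint")
        if svc.get("public", False) and "*" in svc.get("cors", []):
            findings.append(f"{name}: publicly exposed with wildcard CORS")
    return findings


if __name__ == "__main__":
    for finding in audit(SERVICES):
        print("RISK:", finding)
```

Checks like this do not remove the complexity the author warns about, but they make it observable across many distributed components.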
Kaspersky confirmed the issue on the company's official forums on Sunday and
said that it's currently investigating why its software is no longer available
on Google's app store. "The downloads and updates of Kaspersky products are
temporarily unavailable on the Google Play store," a Kaspersky employee said.
"Kaspersky is currently investigating the circumstances behind the issue and
exploring potential solutions to ensure that users of its products can
continue downloading and updating their applications from Google Play." While
the apps are unavailable, Kaspersky advised users to install them from
alternative app stores, including the Galaxy Store, Huawei AppGallery, and
Xiaomi GetApps. The company's security apps can also be installed by
downloading the .apk installation file from Kaspersky's website. This support
page provides more information on how to install and activate Kaspersky's
software on Android devices. This comes after Kaspersky told BleepingComputer
in July that it would shut down its United States operations after the U.S.
government sanctioned the company and 12 executives and banned Kaspersky
antivirus software over national security concerns in June.
How to Get Going with CTEM When You Don't Know Where to Start
Continuous Threat Exposure Management (CTEM) is a strategic framework that
helps organizations continuously assess and manage cyber risk. It breaks down
the complex task of managing security threats into five distinct stages:
Scoping, Discovery, Prioritization, Validation, and Mobilization. Each of
these stages plays a crucial role in identifying, addressing, and mitigating
vulnerabilities before they can be exploited by attackers. ... As
transformational as CTEM is, many teams see the list above and understandably
back off, feeling it is too complex and nuanced an undertaking. Since the
inception of CTEM, some teams have chosen to forgo the benefits because, even
with a roadmap, it seems too cumbersome a lift for them. The most productive
way to make CTEM attainable is a unified approach that simplifies
implementation by integrating its stages into one cohesive platform. ... XM Cyber's unified
approach to CTEM simplifies implementation by integrating multiple stages into
one cohesive platform. This minimizes the complexity associated with deploying
disparate tools and processes.
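As a purely illustrative aid (the stage names come from the article; the data structures and scoring below are assumptions, not XM Cyber's implementation), this sketch models the five CTEM stages as a simple pipeline and shows how a prioritization step might rank discovered exposures by severity, business impact, and validated exploitability.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    SCOPING = auto()
    DISCOVERY = auto()
    PRIORITIZATION = auto()
    VALIDATION = auto()
    MOBILIZATION = auto()


@dataclass
class Exposure:
    name: str
    severity: float          # 0..10, e.g. a CVSS-like score
    business_impact: float   # 0..10, weight of the affected asset
    exploitable: bool = False  # set during the Validation stage


def prioritize(exposures: list[Exposure]) -> list[Exposure]:
    """Prioritization stage: validated-exploitable issues first, then by risk."""
    return sorted(
        exposures,
        key=lambda e: (e.exploitable, e.severity * e.business_impact),
        reverse=True,
    )


if __name__ == "__main__":
    found = [  # would be produced by the Discovery stage in practice
        Exposure("Internet-facing RDP", severity=8.5, business_impact=9, exploitable=True),
        Exposure("Stale service account", severity=6.0, business_impact=7),
        Exposure("Unpatched intranet wiki", severity=7.5, business_impact=3),
    ]
    for e in prioritize(found):
        print(f"{e.name}: severity={e.severity}, impact={e.business_impact}, exploitable={e.exploitable}")
```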
Microsoft Sees Devs Embracing a ‘Paradigm Shift’ to GenAIOps
“One of the key differences with GenAI compared to classic machine learning is that in almost all cases, the GenAI model was not built by the developers’ organization; rather it licensed it or accessed it via an API or downloaded it from an open source repository such as Hugging Face,” Patience told The New Stack. “That puts a greater importance on choosing the right models for the task. Contrast that with narrower predictive models using classic machine learning which were usually built and trained using the organization’s own data.” Many LLMs are massive in size and GenAIOps will bring a more orderly process to collecting, curating, cleaning, and creating proper data sets and the proper measured creation of models with specific checkpoints, Andy Thurai, principal analyst at Constellation Research, told The New Stack. “Otherwise, it will lead to chaos for many reasons,” Thurai said. “This can also lead to huge infrastructure costs if the models are not trained properly. So far, many developers use random techniques and procedures to create ML models or even LLMs. These defined processes, technologies, and procedures bring some order to the creation, deployment, and maintenance of those models.”
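Since the analysts quoted above stress that GenAI teams mostly select and evaluate externally built models rather than train their own, here is a hypothetical sketch (the candidate names, metric, and harness are illustrative assumptions, not Microsoft's or Constellation's tooling) of the kind of repeatable evaluation gate a GenAIOps process might put in front of model selection.

```python
from dataclasses import dataclass
from typing import Callable

# A tiny labeled evaluation set; in practice this would be a curated,
# versioned dataset owned by the team.
EVAL_SET = [
    ("Refund not received after 10 days", "billing"),
    ("App crashes on startup", "bug"),
    ("How do I export my data?", "how-to"),
]


@dataclass
class Candidate:
    name: str                       # e.g. a licensed API model or an open-source checkpoint
    classify: Callable[[str], str]  # the model wrapped behind a uniform callable


def accuracy(candidate: Candidate) -> float:
    """Fraction of evaluation examples the candidate labels correctly."""
    hits = sum(candidate.classify(text) == label for text, label in EVAL_SET)
    return hits / len(EVAL_SET)


def select(candidates: list[Candidate], threshold: float = 0.6) -> Candidate | None:
    """Pick the best candidate that clears the quality gate, else none."""
    best = max(candidates, key=accuracy)
    return best if accuracy(best) >= threshold else None


if __name__ == "__main__":
    # Dummy "models" so the sketch runs without any external service.
    naive = Candidate("naive-baseline", lambda t: "billing")
    keyword = Candidate("keyword-rules", lambda t: "bug" if "crash" in t.lower()
                        else "billing" if "refund" in t.lower() else "how-to")
    chosen = select([naive, keyword])
    print("Selected:", chosen.name if chosen else "no candidate met the bar")
```

The measurable gate, rather than an ad hoc choice, is what turns model selection into an operational checkpoint of the kind Thurai describes.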
How Tech Companies Are Readying IT Security For Quantum Computing
When preparing for PQC, a good place to start is to identify all the points of encryption in your organization. Start with sensitive areas including VPN, external server access, and remote access. IT leaders should also identify the cryptographic methods they're currently using and think about how their organization can upgrade to post-quantum standards in the future. Some encryption methods that are currently in use are particularly vulnerable to future quantum computers. For example, a method called RSA (named after Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977) encrypts a large portion of internet traffic. While this method uses prime factors that are difficult for traditional computers to decode, it's much easier for a quantum computer. Prior to a powerful quantum computer being released, organizations will need to replace RSA. Fortunately, there are many options to do this. One is to double the number of bits current RSA encryption uses, from 2,048 to 4,096. This number is difficult for even quantum computers to crack. The same goes for other encryption schemes. By increasing the problem size, you can make it much harder to solve.
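As one way to act on the "identify your points of encryption" advice, the sketch below (an illustrative assumption, not the article's method; it requires the third-party cryptography package and network access) connects to a server, reads its certificate, and reports the public-key algorithm and size, the kind of detail a PQC migration inventory would record for each endpoint.

```python
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa


def inspect_endpoint(host: str, port: int = 443) -> str:
    """Fetch the server certificate and describe its public-key algorithm."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{host}: RSA {key.key_size}-bit (flag for post-quantum migration)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{host}: ECC {key.curve.name} (flag for post-quantum migration)"
    return f"{host}: {type(key).__name__}"


if __name__ == "__main__":
    # Hypothetical inventory list; replace with your own VPN, remote-access,
    # and external-server endpoints.
    for endpoint in ["example.com"]:
        print(inspect_endpoint(endpoint))
```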
Why MFA alone won’t protect you in the age of adversarial AI
“MFA changed the game for a long time,” said Caulfield. “But what we’ve found over the past 5 years with these recent identity attacks is that MFA can easily be defeated.” One of the greatest threats to MFA is social engineering, or more personalized psychological tactics. Because people put so much of themselves online, via social media or LinkedIn, attackers have free rein to research anyone in the world. Thanks to increasingly sophisticated AI tools, stealthy threat actors can craft campaigns “at mass scale,” said Caulfield. They will initially use phishing to access a user’s primary credential, then employ AI-based outreach to trick them into sharing a second credential or taking an action that lets attackers into their account. Or, attackers will spam the secondary MFA SMS or push-notification method, causing “MFA fatigue,” until the user eventually gives in and presses “allow.” Threat actors will also prime victims, making situations seem urgent, or fool them into thinking they’re getting legitimate messages from an IT help desk. With man-in-the-middle attacks, meanwhile, an attacker can intercept a code during transmission between user and provider.
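To make the “MFA fatigue” pattern concrete, here is a small illustrative sketch (the threshold, field names, and log format are assumptions, not any vendor's detection logic) that flags accounts receiving an unusual burst of push prompts in a short window, a common signal that an attacker is spamming approvals.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical push-notification log: (user, timestamp of MFA prompt).
PROMPTS = [
    ("alice", datetime(2024, 10, 7, 9, 0)),
    ("alice", datetime(2024, 10, 7, 9, 1)),
    ("alice", datetime(2024, 10, 7, 9, 2)),
    ("alice", datetime(2024, 10, 7, 9, 3)),
    ("bob", datetime(2024, 10, 7, 9, 5)),
]

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # more prompts than this inside one window looks like spamming


def flag_mfa_fatigue(prompts):
    """Return users whose prompt count inside any sliding window exceeds THRESHOLD."""
    by_user = defaultdict(list)
    for user, ts in prompts:
        by_user[user].append(ts)
    flagged = set()
    for user, times in by_user.items():
        times.sort()
        for i, start in enumerate(times):
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) > THRESHOLD:
                flagged.add(user)
                break
    return flagged


if __name__ == "__main__":
    print("Possible MFA-fatigue targets:", flag_mfa_fatigue(PROMPTS))
```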
How Functional Programming Can Help You Write Efficient, Elegant Web Applications
Functional programming might seem intimidating and overly academic at first,
but once you get the hang of it, it's a game-changer and a lot of fun on top
of it! To better understand how functional programming can help us build more
maintainable software, let's start from the beginning and understand why a
program becomes harder and harder to maintain as it grows larger.
... Another advantage of pure functions is that they are easy to test for the
above reasons. There is no need to mock objects because every function depends
only on its inputs, and there is no need to set up and verify internal states
at the end of the tests because they don't have any. Finally, using immutable
data and pure functions dramatically simplifies the parallelisation of tasks
across multiple CPUs and machines on the network. For this reason, many of the
so-called "big data" solutions have adopted functional architectures. However,
there are no silver bullets in computer programming. Both the functional
approach and the object-oriented approach have tradeoffs. If your application
has a very complex mutable state that is primarily local, it may take
considerable work to model in a functional design.
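As a quick illustration of the pure-function and immutability points above (the example is ours, not from the article), compare a stateful counter with a pure equivalent: the pure version needs no mocks or setup to test and can safely run in parallel.

```python
from functools import reduce

# Impure: the result depends on hidden, mutable state, so tests must set it
# up and parallel callers can interfere with each other.
total = 0

def add_to_total(x: int) -> int:
    global total
    total += x
    return total


# Pure: output depends only on the inputs; no state to mock or reset.
def add(acc: int, x: int) -> int:
    return acc + x


def sum_all(values: tuple[int, ...]) -> int:
    """Fold over immutable input; trivially testable and parallel-friendly."""
    return reduce(add, values, 0)


if __name__ == "__main__":
    assert sum_all((1, 2, 3)) == 6   # the entire "test suite" for the pure version
    print(sum_all((1, 2, 3)))
```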
AI has a stupid secret: we’re still not sure how to test for human levels of intelligence
Traditional human IQ tests have long been controversial for failing to capture the multifaceted nature of intelligence, encompassing everything from language to mathematics to empathy to sense of direction. There’s an analogous problem with the tests used on AIs. There are many well-established tests covering such tasks as summarising text, understanding it, drawing correct inferences from information, recognising human poses and gestures, and machine vision. Some tests are being retired, usually because the AIs are doing so well at them, but they’re so task-specific as to be very narrow measures of intelligence. For instance, the chess-playing AI Stockfish is way ahead of Magnus Carlsen, the highest-rated human player of all time, on the Elo rating system. Yet Stockfish is incapable of doing other tasks such as understanding language. Clearly it would be wrong to conflate its chess capabilities with broader intelligence. But with AIs now demonstrating broader intelligent behaviour, the challenge is to devise new benchmarks for comparing and measuring their progress. One notable approach has come from French Google engineer François Chollet. He argues that true intelligence lies in the ability to adapt and generalise learning to new, unseen situations.
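Since the excerpt leans on the Elo rating system to compare Stockfish and Magnus Carlsen, here is a short refresher sketch of how Elo works (the formulas are standard; the K-factor and sample ratings are illustrative): the expected score follows a logistic curve in the rating difference, and ratings move toward the observed result.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B (between 0 and 1)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


def update(rating: float, expected: float, actual: float, k: float = 16.0) -> float:
    """Move the rating toward the actual result (1 win, 0.5 draw, 0 loss)."""
    return rating + k * (actual - expected)


if __name__ == "__main__":
    engine, human = 3600.0, 2850.0          # illustrative ratings only
    e = expected_score(engine, human)
    print(f"Expected score for the engine: {e:.3f}")
    print(f"Engine rating after a win: {update(engine, e, 1.0):.1f}")
```

The narrowness problem the author raises is visible here: the rating says nothing about any skill other than the one game it measures.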
How CISOs are navigating the “slippery” AI data protection problem
The problem, according to Hudson, is that policing this is “too slippery” and that as soon as businesses say no to their staff, or block access to the platforms, they simply find ways to circumvent these measures. Hudson asked a panel of CISOs at leading financial institutions in the US how they were navigating this landscape fraught with potential privacy violations. Togai Andrews, CISO at the US Bureau of Engraving and Printing, said he had been working on a governance policy to allow the use of generative AI technology in a responsible way but struggled to back up this policy with effective technical controls. Andrews said this failure to enforce the policy was laid bare in a recent internal report on employee use of generative AI in the office, noting that he was virtually powerless to prevent it. “A month ago I got a report that stated about 40% of our users were using [tools like] Copilot, Grammarly, or ChatGPT to make reports and to summarize internal documents, but I had no way of stopping it.” He explained that as a result he has changed his approach, focusing on ensuring employees have a better grasp of the data risks associated with using such tools in their day-to-day workflow.
Quote for the day:
"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher