Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so-called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says.
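
To make the double-charging point concrete, here is a small back-of-the-envelope sketch in Python. The per-GB rates are purely hypothetical placeholders; actual pricing varies by provider, region, and contract.

```python
# Rough sketch of how cross-AZ transfer fees accumulate for a data-heavy workload.
# The per-GB rates below are hypothetical placeholders, not any provider's price list.

def monthly_transfer_cost(gb_per_day: float,
                          egress_rate_per_gb: float = 0.01,
                          ingress_rate_per_gb: float = 0.01,
                          days: int = 30) -> float:
    """Estimate monthly cross-AZ transfer cost when both directions are billed."""
    per_day = gb_per_day * (egress_rate_per_gb + ingress_rate_per_gb)
    return per_day * days

if __name__ == "__main__":
    # e.g. a pipeline shuttling 5 TB per day between two availability zones
    print(f"${monthly_transfer_cost(5_000):,.2f} per month")  # ~$3,000 at these placeholder rates
```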


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”
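
As a rough illustration of the kind of deployment Nag describes, the sketch below runs a quantized open-weight model entirely on local infrastructure using the llama-cpp-python package. The model path, context size, and generation settings are illustrative assumptions, not a reference configuration.

```python
# Minimal sketch: serving a quantized open-weight model inside your own infrastructure
# so prompts and data never leave your control boundary. Assumes the llama-cpp-python
# package and a locally downloaded GGUF model file; path and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,     # context window
    n_threads=8,    # CPU threads; GPU offload is also possible
)

response = llm(
    "Summarize the key risks of sending customer data to a third-party API.",
    max_tokens=256,
    temperature=0.2,
)
print(response["choices"][0]["text"])
```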


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems, etc. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale, they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure, the experience to deploy it, and the ability to manage the data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.
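
The core loop that RAGOps tooling automates and monitors can be sketched in a few lines. The embed(), vector_search(), and generate() helpers below are hypothetical stand-ins for whatever embedding service, vector store, and LLM an organization has wired into its pipeline.

```python
# Skeleton of the retrieve-then-generate step that RAGOps tooling automates and monitors
# at scale. embed(), vector_search(), and generate() are hypothetical stand-ins for the
# embedding model, vector database, and LLM endpoint an organization actually uses.
from typing import List

def embed(text: str) -> List[float]:
    raise NotImplementedError  # e.g. call an embedding model service

def vector_search(query_vector: List[float], top_k: int = 5) -> List[str]:
    raise NotImplementedError  # e.g. query the vector database

def generate(prompt: str) -> str:
    raise NotImplementedError  # e.g. call the LLM endpoint

def answer(question: str) -> str:
    """Single RAG query: retrieve grounding passages, then generate an answer."""
    passages = vector_search(embed(question))
    context = "\n\n".join(passages)
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

RAGOps, as described above, wraps this loop with automated ingestion, load balancing, and pipeline-wide monitoring rather than changing the loop itself.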


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst-case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two-thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques required to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches: Invest in digital literacy—Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma. Emphasize inclusivity—The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles. Create rituals for connection—Leaders can foster connection through virtual rituals and gatherings in the absence of physical offices. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community. Focus on well-being—Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill

Daily Tech Digest - January 08, 2025

GenAI Won’t Work Until You Nail These 4 Fundamentals

Too often, organizations leap into GenAI fueled by excitement rather than strategic intent. The urgency to appear innovative or keep up with competitors drives rushed implementations without distinct goals. They see GenAI as the “shiny new [toy],” as Kevin Collins, CEO of Charli AI, aptly puts it, but the reality check comes hard and fast: “Getting to that shiny new toy is expensive and complicated.” This rush is reflected in over 30,000 mentions of AI on earnings calls in 2023 alone, signaling widespread enthusiasm but often without the necessary clarity of purpose. ... The shortage of strategic clarity isn’t the only roadblock. Even when organizations manage to identify a business case, they often find themselves hamstrung by another pervasive issue: their data. Messy data hampers organizations’ ability to mature beyond entry-level use cases. Data silos, inconsistent formats and incomplete records create bottlenecks that prevent GenAI from delivering its promised value. ... Weak or nonexistent governance structures expose companies to various ethical, legal and operational risks that can derail their GenAI ambitions. According to data from an Info-Tech Research Group survey, only 33% of GenAI adopters have implemented clear usage policies. 


Inside the AI Data Cycle: Understanding Storage Strategies for Optimised Performance

The AI Data Cycle is a six-stage framework, beginning with the gathering and storing of raw data. In this initial phase, data is collected from multiple sources, with a focus on assessing its quality and diversity, which establishes a strong foundation for the stages that follow. For this phase, high-capacity enterprise hard disk drives (eHDDs) are recommended, as they provide high storage capacity and cost-effectiveness per drive. In the next stage, data is prepared for ingestion, and this is where insight from the initial data collection phase is processed, cleaned and transformed for model training. To support this phase, data centers are upgrading their storage infrastructure – such as implementing fast data lakes – to streamline data preparation and intake. At this point, high-capacity SSDs play a critical role, either augmenting existing HDD storage or enabling the creation of all-flash storage systems for faster, more efficient data handling. Next is the model training phase, where AI algorithms learn to make accurate predictions using the prepared training data. This stage is executed on high-performance supercomputers, which require specialised, high-performing storage to function optimally. 


Buy or Build: Commercial Versus DIY Network Automation

DIY automation can be tailored to your specific network and, in some cases, to meet security or compliance requirements more easily than vendor products. And they come at a great price: free! The cost of a commercial tool is sometimes higher than the value it creates, especially if you have unusual use cases. But DIY tools take time to build and support. Over 50% of organizations in EMA’s survey spend 6-20 hours per week debugging and supporting homegrown tools. Cultural preferences also come into play. While engineers love to grumble about vendors and their products, that doesn’t mean they prefer DIY. In my experience, NetOps teams are often set in their ways, preferring manual processes that do not scale up to match the complexity of modern networks. Many network engineers do not have the coding skills to build good automation, and most don't think about how to tackle problems with automation broadly. The first and most obvious fix for the issues holding back automation is simply for automation tools to get better. They must have broad integrations and be vendor neutral. Deep network mapping capabilities help resolve the issue of legacy networks and reduce the use cases that require DIY. Low or no-code tools help ease budget, staffing, and skills issues.


How HR can lead the way in embracing AI as a catalyst for growth

Common workplace concerns include job displacement, redundancy, bias in AI decision-making, output accuracy, and the handling of sensitive data. Tracy notes that these are legitimate worries that HR must address proactively. “Clear policies are essential. These should outline how AI tools can be used, especially with sensitive data, and safeguards must be in place to protect proprietary information,” she explains. At New Relic, open communication about AI integration has built trust. AI is viewed as a tool to eliminate repetitive tasks, freeing time for employees to focus on strategic initiatives. For instance, their internally developed AI tools support content drafting and research, enabling leaders like Tracy to prioritize high-value activities, such as driving organizational strategy. “By integrating AI thoughtfully and transparently, we’ve created an environment where it’s seen as a partner, not a threat,” Tracy says. This approach fosters trust and positions AI as an ally in smarter, more secure work practices. “The key is to highlight how AI can help everyone excel in their roles and elevate the work they do every day. While it’s realistic to acknowledge that some aspects of our jobs—or even certain roles—may evolve with AI, the focus should be on how we integrate it into our workflow and use it to amplify our impact and efficiency,” notes Tracy.


Cloud providers are running out of ‘next big things’

Yes, every cloud provider is now “an AI company,” but let’s be honest — they’re primarily engineering someone else’s innovations into cloud-consumable services. GPT-4 through Microsoft Azure? That’s OpenAI’s innovation. Vector databases? They came from the open source community. Cloud providers are becoming AI implementation platforms rather than AI innovators. ... The root causes of the slowdown in innovation are clear. First, market maturity indicates that the foundational issues in cloud computing have mostly been resolved; what’s left are increasingly specialized niche cases. Second, AWS, Azure, and Google Cloud are no longer the disruptors — they’re the defenders of market share. Their focus has shifted from innovation to optimization and retention. A defender’s mindset manifests itself in product strategies. Rather than introducing revolutionary new services, cloud providers are fine-tuning existing offerings. They’re also expanding geographically, with the hyperscalers expected to announce 30 new regions in 2025. However, these expansions are driven more by data sovereignty requirements than innovative new capabilities. This innovation slowdown has profound implications for enterprises. Many organizations bet their digital transformation on cloud-native architectures with continuous innovation.


Historical Warfare’s Parallels with Cyber Warfare

In 1942, the British considered Singapore nearly impregnable. They fortified its coast heavily, believing any attack would come from the sea. Instead, the Japanese stunned the defenders by advancing overland through dense jungle terrain the British deemed impassable. This unorthodox approach, using bicycles in great numbers and small tracks through the jungle, enabled the Japanese forces to hit the defences at the weakest point and well ahead of the projected time, catching the British defences off guard. In cybersecurity, this corresponds to zero-day vulnerabilities and unconventional attack vectors. Hackers exploit flaws that defenders never saw coming, turning supposedly secure systems into easy marks. The key lesson is never to grow complacent, because you never know what you can be hit with and when. ... Cyber attackers also use psychology against their targets. Phishing emails appeal to curiosity, trust, greed, or fear, thus luring victims into clicking malicious links or revealing passwords. Social engineering exploits human nature rather than code, and defenders must recognise that people, not just machines, are the frontline. Regular training, clear policies, and an ingrained culture of healthy scepticism, which is present in most IT staff, can thwart even the most artful psychological ploys.


Insider Threat: Tackling the Complex Challenges of the Enemy Within

Third-party background checking can only go so far. It must be supported by old-fashioned and experienced interview techniques. Omri Weinberg, co-founder and CRO at DoControl, explains his methodology: “We’re primarily concerned with two types of bad actors. First, there are those looking to use the company’s data for nefarious purposes. These individuals typically have the skills to do the job and then some – they’re often overqualified. They pose a severe threat because they can potentially access and exploit sensitive data or systems.” The second type includes those who oversell their skills and are actually under or way underqualified. “While they might not have malicious intent, they can still cause significant damage through incompetence or by introducing vulnerabilities due to their lack of expertise. For the overqualified potential bad actors, we’re wary of candidates whose skills far exceed the role’s requirements without a clear explanation. For the underqualified group, we look for discrepancies between claimed skills and actual experience or knowledge during interviews.” This means it is important to probe the candidate during the interview to gauge their true skill level. “It’s essential that the person evaluating the hire has the technical expertise to make these determinations,” he added.


Raise your data center automation game with easy ecosystem integration

If integrations are the key, then the things you look for to understand whether a product is flashy or meaningful should change. The UI matters, but the way tools are integrated is the truly telling characteristic. What APIs exist? How is data normalized? Are interfaces versioned and maintained across different releases? Can you create complex dashboards that pull things together from different sources using no-code models that don't require source access to contextualize your environment? How are workflows strung together into more complex operations? By changing your focus, you can start to evaluate these platforms based on how well they integrate rather than on how snazzy the time series database interface is. Of course, things like look and feel matter, but anyone who wants to scale their operations will realize that the UI might not even be the dominant consumption model over time. Is your team looking to click their way through to completion? ... Wherever you are in this discovery process, let me offer some simple advice: Expand your purview from the network to the ecosystem and evaluate your options in the context of that ecosystem. When you do that effectively, you should know which solutions are attractive but incremental and which are likely to create more durable value for you and your organization.


Why Scrum Masters Should Grow Their Agile Coaching Skills

More than half of the organizations surveyed report that finding scrum masters with the right combination of skills to meet their evolving demands is very challenging. Notably, 93% of companies seek candidates with strong coaching skills but state that it’s one of the skills hardest to find. Building strong coaching and facilitation skills can help you stand out in the job market and open doors to new career opportunities. As scrum masters are expected to take on increasingly strategic roles, your skills become even more valuable. Senior scrum masters, in particular, are called upon to handle politically sensitive and technically complex situations, bridging gaps between development teams and upper management. Coaching and facilitation skills are requested nearly three times more often for senior scrum master roles than for other positions. Growing these coaching competencies can give you an edge and help you make a bigger impact in your career. ... Who wouldn’t want to move up in their career into roles with greater responsibilities and bigger impact? Regardless of the area of the company you’re in—product, sales, marketing, IT, operations—you’ll need leadership skills to guide people and enable change within the organization. 


Scaling penetration testing through smart automation

Automation undoubtedly has tremendous potential to streamline the penetration testing lifecycle for MSSPs. The most promising areas are the repetitive, data-intensive, and time-consuming aspects of the process. For instance, automated tools can cross-reference vulnerabilities against known exploit databases like CVE, significantly reducing manual research time. They can enhance accuracy by minimizing human error in tasks like calculating CVSS scores. Automation can also drastically reduce the time required to compile, format, and standardize pen-testing reports, which can otherwise take hours or even days depending on the scope of the project. For MSSPs handling multiple client engagements, this could translate into faster project delivery cycles and improved operational efficiency. For their clients – it enables near real-time responses to vulnerabilities, reducing the window of exposure and bolstering their overall security posture. However – and this is crucial – automation should not be treated as a silver bullet. Human expertise remains absolutely indispensable in the testing itself. The human ability to think creatively, to understand complex system interactions, to develop unique attack scenarios that an algorithm might miss—these are irreplaceable. 
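
As one example of the repetitive lookups this kind of automation can absorb, the sketch below cross-references a finding against NIST's National Vulnerability Database to pull its CVSS base score. The response fields shown are a best-effort assumption about the NVD 2.0 API and should be checked against the current documentation.

```python
# Sketch of the kind of lookup an automated pen-testing pipeline performs: cross-referencing
# a finding against a public vulnerability database instead of researching it by hand.
# The response schema used here is an assumption; verify against the current NVD API docs.
import requests
from typing import Optional

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_cvss(cve_id: str) -> Optional[float]:
    """Fetch the CVSS base score for a CVE from the NVD, if one is published."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0]["cve"].get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        if key in metrics:
            return metrics[key][0]["cvssData"]["baseScore"]
    return None

if __name__ == "__main__":
    print(lookup_cvss("CVE-2021-44228"))  # Log4Shell; expected base score around 10.0
```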



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - January 07, 2025

With o3 having reached AGI, OpenAI turns its sights toward superintelligence

One of the challenges of achieving AGI is defining it. As of yet, researchers and the broader industry do not have a concrete description of what it will be and what it will be able to do. The general consensus, though, is that AGI will possess human-level intelligence, be autonomous, have self-understanding, and will be able to “reason” and perform tasks that it was not trained to do. ... Going beyond AGI, “superintelligence” is generally understood to be AI systems that far surpass human intelligence. “With superintelligence, we can do anything else,” Altman wrote. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” He added, “this sounds like science fiction right now, and somewhat crazy to even talk about it.” However, “we’re pretty confident that in the next few years, everyone will see what we see,” he said, emphasizing the need to act “with great care” while still maximizing benefit. ... OpenAI set out to build AGI from its founding in 2015, when the concept of AGI, as Altman put it to Bloomberg, was “nonmainstream.” “We wanted to figure out how to build it and make it broadly beneficial,” he wrote in his blog post. 


Bridging the execution gap – why AI is the new frontier for corporate strategy

Imagine a future where leadership teams are not constrained by outdated processes but empowered by intelligent systems. In this world, CEOs use AI to visualise their entire organisation’s alignment, ensuring every department contributes to strategic goals. Middle managers leverage real-time insights to adapt plans dynamically, while employees understand how their work drives the company’s mission forward. Such an environment fosters resilience, innovation, and engagement. By turning strategy into a living, breathing entity, organisations can adapt to challenges and seize opportunities faster than ever before. The road to this future is not without challenges. Leaders must embrace cultural change, invest in the right technologies, and commit to continuous learning. But the rewards – a thriving, agile organisation capable of navigating the complexities of the modern business landscape – are well worth the effort. The execution gap has plagued organisations for decades, but the tools to overcome it are now within reach. AI is more than a technological advancement; it is the key to unlocking the full potential of corporate strategy. By embracing adaptability and leveraging AI’s transformative capabilities, businesses can ensure their strategies do not just survive but thrive in the face of change.


Google maps the future of AI agents: Five lessons for businesses

Google argues that AI agents represent a fundamental departure from traditional language models. While models like GPT-4o or Google’s Gemini excel at generating single-turn responses, they are limited to what they’ve learned from their training data. AI agents, by contrast, are designed to interact with external systems, learn from real-time data and execute multi-step tasks. “Knowledge [in traditional models] is limited to what is available in their training data,” the paper notes. “Agents extend this knowledge through the connection with external systems via tools.” This difference is not just theoretical. Imagine a traditional language model tasked with recommending a travel itinerary. ... At the heart of an AI agent’s capabilities is its cognitive architecture, which Google describes as a framework for reasoning, planning and decision-making. This architecture, known as the orchestration layer, allows agents to process information in cycles, incorporating new data to refine their actions and decisions. Google compares this process to a chef preparing a meal in a busy kitchen. The chef gathers ingredients, considers the customer’s preferences and adapts the recipe as needed based on feedback or ingredient availability. Similarly, an AI agent gathers data, reasons about its next steps and adjusts its actions to achieve a specific goal.
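
A minimal sketch of such an orchestration loop is shown below: the agent reasons, optionally calls an external tool, folds the observation back into its context, and repeats. The llm() function and the tool registry are hypothetical placeholders rather than any specific vendor's API.

```python
# Minimal sketch of an orchestration-layer loop: reason, call a tool, observe, repeat.
# llm() and the TOOLS registry are hypothetical placeholders, not a specific product API.
def llm(prompt: str) -> dict:
    """Hypothetical reasoning call returning {"action": ..., "input": ..., "final": ...}."""
    raise NotImplementedError

TOOLS = {
    "search_flights": lambda q: f"flight results for {q}",   # stand-in tools
    "check_weather": lambda q: f"weather forecast for {q}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}\n"
    for _ in range(max_steps):
        decision = llm(context)
        if decision.get("final"):           # the model decides it can answer
            return decision["final"]
        observation = TOOLS[decision["action"]](decision["input"])
        context += f"Observation from {decision['action']}: {observation}\n"
    return "Stopped without a final answer."
```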


AI agents will change work forever. Here's how to embrace that transformation

The business world is full of orthodoxies, beliefs that no one questions because they are thought to be "just the way things are". One such orthodoxy is the phrase: "Our people are the difference". A simple Google search can attest to its popularity. Some companies use this orthodoxy as their official or unofficial tagline, a tribute to their employees that they hope sends the right message internally and externally. They hope their employees feel special and customers take this orthodoxy as proof of their human goodness. Other firms use this orthodoxy as part of their explanation of what makes their company different. It's part of their corporate story. It sounds nice, caring, and positive. The only problem is that this orthodoxy is not true. ... Another way to put this is that individual employees are not fixed assets. They do not behave the same way in all conditions. In most cases, employees are adaptable and can absorb and respond to change. The environment, conditions, and potential for relationships cause this capacity to express itself. So, on the one hand, one company's employees are the same as any other company's employees in the same industry. They move from company to company, read the same magazines, attend similar conventions, and learn the same strategies and processes.


Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting

Identifying potential vulnerabilities is one thing, but writing exploit code that works against them requires a more advanced understanding of security flaws, programming, and the defense mechanisms that exist on the targeted platforms. ... This is one area where LLMs could make a significant impact: bridging the knowledge gap between junior bug hunters and experienced exploit writers. Even generating new variations of existing exploits to bypass detection signatures in firewalls and intrusion prevention systems is a notable development, as many organizations don’t deploy available security patches immediately, instead relying on their security vendors to add detection for known exploits until their patching cycle catches up. ... “AI tools can help less experienced individuals create more sophisticated exploits and obfuscations of their payloads, which aids in bypassing security mechanisms, or providing detailed guidance for exploiting specific vulnerabilities,” Nițescu said. “This, indeed, lowers the entry barrier within the cybersecurity field. At the same time, it can also assist experienced exploit developers by suggesting improvements to existing code, identifying novel attack vectors, or even automating parts of the exploit chain. This could lead to more efficient and effective zero-day exploits.”


GDD: Generative Driven Design

The independent and unidirectional relationship between agentic platform/tool and codebase that defines the Doctor-Patient strategy is also the greatest limiting factor of this strategy, and the severity of this limitation has begun to present itself as a dead end. Two years of agentic tool use in the software development space have surfaced antipatterns that are increasingly recognizable as “bot rot” — indications of poorly applied and problematic generated code. Bot rot stems from agentic tools’ inability to account for, and interact with, the macro architectural design of a project. These tools pepper prompts with lines of context from semantically similar code snippets, which are utterly useless in conveying architecture without a high-level abstraction. Just as a chatbot can manifest a sensible paragraph in a new mystery novel but is unable to thread accurate clues as to “who did it”, isolated code generations pepper the codebase with duplicated business logic and cluttered namespaces. With each generation, bot rot reduces RAG effectiveness and increases the need for human intervention. Because bot rotted code requires a greater cognitive load to modify, developers tend to double down on agentic assistance when working with it, and in turn rapidly accelerate additional bot rotting.


Someone needs to make AI easy

Few developers did a better job of figuring out how to effectively use AI than Simon Willison. In his article “Things we learned about LLMs in 2024,” he simultaneously susses out how much happened in 2024 and why it’s confusing. For example, we’re all told to aggressively use genAI or risk falling behind, but we’re awash in AI-generated “slop” that no one really wants to read. He also points out that LLMs, although marketed as the easy path to AI riches for all who master them, are actually “chainsaws disguised as kitchen knives.” He explains that “they look deceptively simple to use … but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.” If anything, this quagmire got worse in 2024. Incredibly smart people are building incredibly sophisticated systems that leave most developers incredibly frustrated by how to use them effectively.  ... Some of this stems from the inability to trust AI to deliver consistent results, but much of it derives from the fact that we keep loading developers up with AI primitives (similar to cloud primitives like storage, networking, and compute) that force them to do the heavy lifting of turning those foundational building blocks into applications.


Making the most of cryptography, now and in the future

The mathematicians and cryptographers who have worked on these NIST algorithms expect them to last a long time. Thousands of people have already tried to poke holes into them and haven’t yet made any meaningful progress toward defeating them. So, they are “probably” OK for the time being. But as much as we would like to, we cannot mathematically prove that they cannot be broken. This means that for commercial enterprises looking to migrate to new cryptography, they should be braced to change again and again — whether that is in five years, 10 years, or 50 years. ... Up until now, most cryptography was mostly implicit and not under direct control of the management. Putting more controls around cryptography would not only safeguard data today, but it would provide the foundation to make the next transition easier. ... Cryptography is full of single points of failure. Even if your algorithm is bulletproof, you might end up with a faulty implementation. Agility helps us move away from these single points of failure, allowing us to adapt quickly if an algorithm is compromised. It is therefore crucial for CISOs to start thinking about agility and redundancy.
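
One concrete reading of that agility advice is to avoid hard-coding any primitive and instead route every call through a registry keyed by a configurable algorithm name, so a compromised algorithm can be retired via configuration rather than a code change. The sketch below uses hashing only to keep the example small; the same pattern applies to signatures and key exchange.

```python
# Sketch of crypto agility as an abstraction layer: callers never name a primitive directly;
# they go through a registry keyed by a configurable algorithm name, so retiring a broken
# algorithm is a configuration change rather than a code change.
import hashlib

HASH_REGISTRY = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
    "blake2b": hashlib.blake2b,
}

ACTIVE_ALGORITHM = "sha256"  # would come from configuration, not source code

def digest(data: bytes, algorithm: str = None) -> str:
    """Hash data with the currently configured algorithm."""
    algo = HASH_REGISTRY[algorithm or ACTIVE_ALGORITHM]
    return algo(data).hexdigest()

# Switching ACTIVE_ALGORITHM to "sha3_256" re-points every caller without touching their code.
print(digest(b"customer record"))
```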


Data 2025 outlook: AI drives a renaissance of data

Though not all the technology building blocks are in place, many already are. Using AI to crawl and enrich metadata? Automatically generate data pipelines? Using regression analysis to flag data and model drift? Using entity extraction to flag personally identifiable information or summarize the content of structured or unstructured data? Applying machine learning to automate data quality resolution and data classification? Applying knowledge graphs to RAG? You get the idea. There are a few technology gaps that we expect will be addressed in 2025, including automating the correlation between data and model lineage, assessing the utility and provenance of unstructured data, and simplifying generation of vector embeddings. We expect in the coming year that bridging data file and model lineage will become commonplace with AI governance tools and services. And we’ll likely look to emerging approaches such as data observability to transform data quality practices from reactive to proactive. Let’s start with governance. In the data world, this is hardly a new discipline. Though data governance over the years has drawn more lip service than practice, for structured data, the underlying technologies for managing data quality, privacy, security and compliance are arguably more established than for AI. 
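
For the embedding-generation step the author expects to get simpler, a typical minimal flow looks like the sketch below, assuming the sentence-transformers package; the model name is just one commonly used example, not a recommendation.

```python
# Sketch of generating vector embeddings for unstructured text so it can be indexed by a
# vector database or used for RAG. Assumes the sentence-transformers package is installed;
# the model name is an illustrative example.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Quarterly revenue grew 12% year over year.",
    "The incident was traced to an expired TLS certificate.",
]
embeddings = model.encode(documents)   # one fixed-length vector per document
print(embeddings.shape)                # (2, 384) for this particular model
```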


Beware the Rise of the Autonomous Cyber Attacker

Research has already shown that teams of AIs working together can find and exploit zero-day vulnerabilities. A team at the University of Illinois Urbana-Champaign created a “task force” of AI agents that worked as a supervised unit and effectively exploited vulnerabilities they had no prior knowledge of. In a recent report, OpenAI also cited three threat actors that used ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command and control infrastructure. The company said the activity offered these groups “limited, incremental (new) capabilities” to carry out malicious cyber tasks. ... “Darker” AI use has, in part, prompted many of today’s top thinkers to support regulations. This year, OpenAI CEO Sam Altman said: “I’m not interested in the killer robots walking on the street … things going wrong. I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things go horribly wrong.” ... Theoretically, regulation may reduce unintended or dangerous use among legitimate users, but I’m certain that the criminal economy will appropriate this technology. As CISOs deploy AI more broadly, attackers’ abilities will concurrently soar.



Quote for the day:

"Leadership is a dynamic process that expresses our skill, our aspirations, and our essence as human beings." -- Catherine Robinson-Walker

Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger," wrote the authors of the report. "Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted to use for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision," Singularity Hub wrote. "In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, who is director of biology at Science Corporation, as saying that the advantages of a biohybrid implant are that it "can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain."


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it. ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic for cyber threat actors and ransomware campaigns to break into organizational networks and spread and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IDP, Okta, and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo that it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that lead to increased attacks. Eight out of 10 organizations cannot prevent the misuse of service accounts in real time due to visibility and security being sporadic or missing. NHIs fly under the radar as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient and reliable way. The goal of MLOps is to provide a consistent and automated process to be able to rapidly get an ML model into production for use by ML technologies. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: This attack involves an attacker having access to the raw data being used in the “Design” phase of the MLOps lifecycle to include attacker-provided data or being able to directly modify a training dataset. The goal of a data poisoning attack is to be able to influence the data that is being trained in an ML model and eventually deployed to production. ... Model extraction attacks involve the ability of an attacker to steal a trained ML model that is deployed in production. An attacker could use a stolen model to extract sensitive training data such as the training weights used, or to use the predictive capabilities used in the model for their own financial gain. 
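
To make the data poisoning scenario concrete, the toy sketch below shows what a label-flipping attack on a training set looks like from the defender's perspective: a small, randomly chosen fraction of labels is silently altered before training. It is purely illustrative, not an operational attack.

```python
# Toy illustration of a label-flipping poisoning attack: an attacker with write access to
# "Design"-phase training data silently flips a small fraction of labels to bias whatever
# model is trained downstream. Illustrative of the defender's problem only.
import numpy as np

rng = np.random.default_rng(seed=7)

def poison_labels(labels: np.ndarray, fraction: float = 0.05) -> np.ndarray:
    """Flip `fraction` of binary labels, chosen at random."""
    poisoned = labels.copy()
    n_flip = int(len(labels) * fraction)
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean = rng.integers(0, 2, size=1000)
dirty = poison_labels(clean)
print(f"{(clean != dirty).sum()} of {len(clean)} labels silently flipped")
```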


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may require community support and frequent updates. In contrast, proprietary tools could offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also reducing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. As the adoption of generative AI systems increases, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because it is the latest trend is not the right approach.


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly and the training of AI models requires a substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premise and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time. 


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how this process helps to hit business goals faster. So by eliminating redundant meetings and scaling governance to match project size and risk, delivery timelines can shorten. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, ideally it needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - January 05, 2025

Phantom data centers: What they are (or aren’t) and why they’re hampering the true promise of AI

Fake data centers represent an urgent bottleneck in scaling data infrastructure to keep up with compute demand. This emerging phenomenon is preventing capital from flowing where it actually needs to. Any enterprise that can help solve this problem — perhaps leveraging AI to solve a problem created by AI — will have a significant edge. ... As utilities struggle to sort fact from fiction, the grid itself becomes a bottleneck. McKinsey recently estimated that global data center demand could reach up to 152 gigawatts by 2030, adding 250 terawatt-hours of new electricity demand. In the U.S., data centers alone could account for 8% of total power demand by 2030, a staggering figure considering how little demand has grown in the last two decades. Yet, the grid is not ready for this influx. Interconnection and transmission issues are rampant, with estimates suggesting the U.S. could run out of power capacity by 2027 to 2029 if alternative solutions aren’t found. Developers are increasingly turning to on-site generation like gas turbines or microgrids to avoid the interconnection bottleneck, but these stopgaps only serve to highlight the grid’s limitations.


Understanding And Preparing For The 7 Levels Of AI Agents

Task-specialized agents excel in somewhat narrow domains, often outperforming humans in specific tasks by collaborating with domain experts to complete well-defined activities. These agents are the backbone of many modern AI applications, from fraud detection algorithms to medical imaging systems. Their origins trace back to the expert systems of the 1970s and 1980s, like MYCIN, a rule-based system for diagnosing infections. ... Context-aware agents distinguish themselves by their ability to handle ambiguity, dynamic scenarios, and synthesize a variety of complex inputs. These agents analyze historical data, real-time streams, and unstructured information to adapt and respond intelligently, even in unpredictable scenarios. ... The idea of self-reflective agents ventures into speculative territory. These systems would be capable of introspection and self-improvement. The concept has roots in philosophical discussions about consciousness, first introduced by Alan Turing in his early work on machine intelligence and later explored by thinkers like David Chalmers. Self-reflective agents would analyze their own decision-making processes and refine their algorithms autonomously, much like a human reflects on past actions to improve future behavior.


The 7 Key Software Testing Principles: Why They Matter and How They Work in Practice

Identifying defects early in the software development lifecycle is critical because the cost and effort to fix issues grow exponentially as development progresses. Early testing not only minimizes these risks but also streamlines the development process by addressing potential problems when they are most manageable and least expensive. This proactive approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software. ... The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or previously unknown defects. To continue identifying issues effectively, test methodologies must evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing refinement ensures that testing remains relevant and capable of discovering previously hidden problems. ... Test strategies must be tailored to the specific context of the software being tested. The requirements for different types of software—such as a mobile app, a high-transaction e-commerce website, or a business-critical enterprise application—vary significantly. As a result, testing methodologies should be customized to address the unique needs of each type of application, ensuring that testing is both effective and relevant to the software's intended use and environment.
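The pesticide paradox is easiest to see in a concrete suite. Below is a small pytest sketch in which an original set of parametrized cases is periodically extended with new inputs (including past regressions) so the same checks are not re-run forever; the validate_email function and the specific cases are hypothetical examples, not taken from the article.

# Sketch of countering the pesticide paradox: extend a parametrized suite
# over time instead of re-running the same cases indefinitely.
import re
import pytest

def validate_email(address: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Original cases: stopped revealing anything new once early bugs were fixed.
BASE_CASES = [("user@example.com", True), ("not-an-email", False)]

# Cases added later: edge cases and past regressions keep the suite effective.
NEW_CASES = [
    ("user@localhost", False),              # no top-level domain
    ("first.last@sub.example.org", True),
    ("two@@example.com", False),
]

@pytest.mark.parametrize("address,expected", BASE_CASES + NEW_CASES)
def test_validate_email(address, expected):
    assert validate_email(address) == expected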


This Year, RISC-V Laptops Really Arrive

DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. ... The DeepComputing DC-Roma II laptop marked a major milestone for open source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II collectively received more than a million views. ... Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.”


The cloud architecture renaissance of 2025

First, get your house in order. The next three to six months should be spent deep-diving into current cloud spending and utilization patterns. I’m talking about actual numbers, not the sanitized versions you show executives. Map out your AI and machine learning (ML) workload projections because, trust me, they will explode beyond your current estimates. While you’re at it, identify which workloads in your public cloud deployments are bleeding money—you’ll be shocked at what you find. Next, develop a workload placement strategy that makes sense. Consider data gravity, performance requirements, and regulatory constraints. This isn’t about following the latest trend; it’s about making decisions that align with business realities. Create explicit ROI models for your hybrid and private cloud investments. Now, let’s talk about the technical architecture. The organizational piece is critical, and most enterprises get it wrong. Establish a Cloud Economics Office that combines infrastructure specialists, data scientists, financial analysts, and security experts. This is not just another IT team; it is a business function that must drive real value. Investment priorities need to shift, too. Focus on automated orchestration tools, cloud management platforms, and data fabric solutions.
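A workload placement strategy of the kind described above can start as something very simple. The Python sketch below scores candidate venues on monthly cost, data gravity (egress penalty for moving data away from where it lives), latency, and regulatory fit; every weight, number, and venue name is an invented placeholder, not a recommendation.

# Minimal placement-scoring sketch; all values are hypothetical.
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    monthly_cost_usd: float      # compute + storage estimate
    egress_cost_usd: float       # data-gravity penalty for moving data out
    p99_latency_ms: float
    meets_residency_rules: bool

def placement_score(v: Venue) -> float:
    if not v.meets_residency_rules:
        return float("-inf")     # regulatory constraints are non-negotiable
    # Lower cost and latency are better, so negate them; weights are arbitrary.
    return -(1.0 * v.monthly_cost_usd
             + 2.0 * v.egress_cost_usd
             + 100.0 * v.p99_latency_ms)

candidates = [
    Venue("public-cloud", 42_000, 9_000, 12, False),  # fails a hypothetical residency rule
    Venue("hosted-private", 35_000, 1_000, 8, True),
    Venue("on-prem", 30_000, 0, 5, True),
]
best = max(candidates, key=placement_score)
print(f"Suggested placement: {best.name}")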


How datacenters use water and why kicking the habit is nearly impossible

While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption. According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process. Ironically, while evaporative coolers are why datacenters consume so much water onsite, the same technology is commonly employed to reduce the amount of water lost to steam. Even so, the amount of water consumed through energy generation far exceeds that consumed by modern datacenters. ... Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption. One of the most obvious is matching water flow rates to facility load and utilizing free cooling wherever possible. By using a combination of sensors and software automation to monitor pumps and filters at facilities that rely on evaporative cooling, Digital Realty has observed a 15 percent reduction in overall water usage, Sharp says.
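To give a sense of what "matching water flow rates to facility load" can mean in software, here is a minimal Python sketch of a proportional control rule that scales pump flow with IT load and falls back to free cooling when outside air is cold enough. The set points, limits, and control law are hypothetical placeholders, not how any particular operator's automation works.

# Minimal sketch: scale cooling-water flow with IT load; all set points
# and the control law are hypothetical.
FREE_COOLING_MAX_TEMP_C = 15.0   # below this, free cooling covers the load
MIN_FLOW_LPM = 200.0             # keep pumps above a safe minimum flow
MAX_FLOW_LPM = 2000.0

def target_flow_lpm(it_load_kw: float, design_load_kw: float,
                    outdoor_temp_c: float) -> float:
    if outdoor_temp_c <= FREE_COOLING_MAX_TEMP_C:
        return MIN_FLOW_LPM
    utilisation = min(it_load_kw / design_load_kw, 1.0)
    return MIN_FLOW_LPM + utilisation * (MAX_FLOW_LPM - MIN_FLOW_LPM)

# Example: a half-loaded facility on a warm day runs pumps at 1,100 L/min,
# i.e. 55 percent of maximum flow.
print(target_flow_lpm(it_load_kw=5_000, design_load_kw=10_000, outdoor_temp_c=28))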


Data centres in space: they’re a brilliant idea, but a herculean challenge

Data centres beyond Earth’s atmosphere would have access to continuous solar energy and could be naturally cooled by the vacuum of space. Away from terrestrial issues like planning permission, such facilities could be rapidly deployed and expanded as the demand for more data keeps increasing. It may sound like something from a sci-fi novel, but this concept has been gaining more attention as space technology has advanced and the need for sustainable and scalable data centres has become apparent. ... Space weather, such as solar flares, could disrupt operations, while collisions with debris are a major worry – rather offsetting the fact that space-based data centres don’t have to fear earthquakes or floods. Advanced shielding could protect against things like radiation and micrometeoroids, but it will probably only do so much – particularly as Earth’s orbit becomes ever more crowded. Advances in robotics and automation will of course help with fixing damaged facilities, but remote maintenance may not be able to address every issue. Sending repair crews remains a very complex and costly affair, and though the falling cost of space launches will help here too, it is still likely to be a huge burden for a few decades to come. In addition, disposing of data centre waste takes on a whole new level of complexity off-planet.


India’s Digital Data Protection Framework: Safety, Trust and Resilience

The draft rules cover various key areas, including the responsibilities of Data Fiduciaries, the role of Consent Managers, and protocols for State Data Processing, particularly in contexts like the distribution of subsidies and public services. They also detail measures for Breach Notifications, mechanisms for individuals to exercise their Data Rights, and special provisions for processing data related to children and persons with disabilities. The Data Protection Board, central to the enforcement of the Act, is set to function as a fully digital office, streamlining its operations and improving accessibility. Additionally, the rules outline procedures for appealing decisions through the Appellate Tribunal, ensuring accountability at every stage. One of the defining aspects of the draft rules is their alignment with the SARAL framework, which emphasises simplicity, clarity, and contextual definitions. To aid public understanding, illustrative examples and explanatory notes have been included, making the document accessible to stakeholders across industries, government bodies, and civil society. Both the draft rules and the accompanying explanatory notes are available on the MeitY website for public review and consultation. While legislative measures are being formalised, the government has swiftly addressed recent data breaches.


The Rise of AI Agents and Data-Driven Decisions

“In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.” Kawasaki emphasizes the developer-centric benefits as well. “AI agents will become faster and easier to build as low-code and no-code platforms mature, reducing the complexity of creating intelligent, AI-powered scenarios,” he says. ... “AI will play a transformative role in the fortification of cyber security by addressing challenges like scalability, prioritization and speed to detection. Unfortunately, cyber threats have become commonplace on the network and attackers are becoming more sophisticated in their methods – many times operating at a threshold that is very difficult to detect. As a result, organizations that fail to integrate an AI capability into their defense strategy risk being exposed to business-altering vulnerabilities. AI’s ability to monitor vast networks for imperceptible anomalies allows organizations to prioritize the most critical threats in real-time.”


New HIPAA Cybersecurity Rules Pull No Punches

Since the beginning, HIPAA has always been the best, yet insufficient, regulation dictating cybersecurity for the healthcare industry. "[There's] a history of the focus being in the wrong place because of the way HIPAA was laid out in the mid-1990s," says Errol Weiss, chief information security officer (CISO) of the Healthcare Information Sharing and Analysis Center (Health-ISAC). ... The newly proposed Security Rule aims to fix things up, with a laundry list of new requirements that touch on patch management, access controls, multifactor authentication (MFA), encryption, backup and recovery, incident reporting, risk assessments, compliance audits, and more. As Lawrence Pingree, vice president at Dispersive, acknowledges, "People have a love-hate relationship with regulations. But there's a lot of good that comes from HIPAA becoming a lot more prescriptive. Whenever you are more specific about the security controls that they must apply, the better off you are." ... Joseph J. Lazzarotti, principal at Jackson Lewis P.C., says provision 164.306 allowed for the kind of flexibility businesses always ask for: "That we're not expecting the same thing from every solo practitioner on Main Street in the Midwest versus the large hospital on the East Coast. There are obviously going to be different expectations for compliance."



Quote for the day:

“Do the best you can until you know better. Then when you know better, do better.” -- Maya Angelou