Daily Tech Digest - January 28, 2025


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki


How Long Does It Take Hackers to Crack Modern Hashing Algorithms?

Because hashing algorithms are one-way functions, the only method to compromise hashed passwords is through brute force techniques. Cyber attackers employ special hardware like GPUs and cracking software (e.g., Hashcat, L0phtCrack, John the Ripper) to execute brute force attacks at scale—typically millions or billions of combinations at a time. Even with these sophisticated, purpose-built cracking tools, password cracking times can vary dramatically depending on the specific hashing algorithm used and the password’s length and character composition. ... With readily available GPUs and cracking software, attackers can instantly crack numeric passwords of 13 characters or fewer secured by MD5’s 128-bit hash; on the other hand, an 11-character password consisting of numbers, uppercase/lowercase characters, and symbols would take 26.5 thousand years. ... When used with long, complex passwords, SHA256 is nearly impenetrable using brute force methods—an 11-character SHA256-hashed password using numbers, upper/lowercase characters, and symbols takes 2,052 years to crack using GPUs and cracking software. However, attackers can instantly crack nine-character SHA256-hashed passwords consisting of only numeric or lowercase characters.
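
The arithmetic behind these figures is straightforward: the keyspace is the charset size raised to the password length, and worst-case cracking time is keyspace divided by guess rate. A minimal sketch in Python; the guess rate is an illustrative assumption for a GPU rig against a fast hash like MD5, not a benchmark from the article:

```python
# Rough worst-case time-to-crack: keyspace divided by guess rate.
# GUESSES_PER_SECOND is an illustrative assumption, not a measurement.
GUESSES_PER_SECOND = 100e9  # hypothetical 100 billion guesses/second

CHARSET_SIZES = {
    "digits only": 10,
    "lowercase only": 26,
    "upper/lower + digits + symbols": 95,  # printable ASCII
}

def crack_time_years(length: int, charset: int,
                     rate: float = GUESSES_PER_SECOND) -> float:
    """Years needed to exhaust the full keyspace at the given rate."""
    keyspace = charset ** length
    return keyspace / rate / (60 * 60 * 24 * 365)

for name, size in CHARSET_SIZES.items():
    for length in (9, 11, 13):
        print(f"{length}-char, {name}: {crack_time_years(length, size):.3g} years")
```

Varying the rate and charset is what stretches the article’s estimates from “instant” to millennia: 10^13 numeric combinations fall in seconds, while 95^11 takes geological time at any realistic rate.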


Sharply rising IT costs have CIOs threading the needle on innovation

“Within two years, it will be virtually impossible to buy a PC, tablet, laptop, or mobile phone without AI,” Lovelock says. “Whether you want it or not, you’re going to get it sold to you.” Vendors have begun to build AI into software as well, he says, and in many cases, charge customers for the additional functionality. IT consulting services will also add AI-based services to their portfolios. ... But the biggest expected price hikes are for cloud computing services, despite years of expectations that cloud prices wouldn’t increase significantly, Lovelock says. “For many years, CIOs were taught that in the cloud, either prices went down, or you got more functionality, and occasionally both, that the economies of scale accrue to the cloud providers and allow for at least stable prices, if not declines or functional expansion,” he says. “It wasn’t until post-COVID in the energy crisis, followed by staff cost increases, when that story turned around.” ... “Generative AI is no longer seen as a one-size-fits-all solution, and this shift is helping both solutions providers and businesses take a more practical approach,” he says. “We don’t see this as a sign of lower expectations but as a move toward responsible and targeted use of generative AI.”


US takes aim at healthcare cybersecurity with proposed HIPAA changes

The major update to the HIPAA security regulations also requires healthcare organizations to strengthen security incident response plans and procedures and to carry out annual penetration tests and compliance audits, among other measures. Many of the proposals cover best-practice enterprise security guidelines foundational to any mature cybersecurity program. ... Cybersecurity experts praised the shift to a risk-based approach covered by the security rule revamp, while some expressed concerns that the measures might tax the financial resources of smaller clinics and healthcare providers. “The security measures called for in the proposed rule update are proven to be effective and will mitigate many of the risks currently present in the poorly protected environments of many healthcare payers, providers, and brokers,” said Maurice Uenuma, VP & GM for the Americas and security strategist at data security firm Blancco. ... Uenuma added: “The challenge will be to implement these measures consistently at scale.” Trevor Dearing, director of critical infrastructure at enterprise security tools firm Illumio, praised the shift from prevention to resilience and the risk-based approach implicit in the rule changes, which he compared to the EU’s recently introduced DORA rules for financial sector organizations.


Risk resilience: Navigating the risks that boards can’t ignore in 2025

The geopolitical landscape is more turbulent than ever. Companies will need to prepare for potential shocks like regional conflicts, supply chain disruptions, or even another pandemic. If geopolitical risks feel dizzyingly complex, scenario planning will be a powerful tool in mapping out different political and economic scenarios. By envisioning various outcomes, boards can better understand their vulnerabilities, prepare tailored responses and enhance risk resilience. To prepare for the year ahead, board and management teams should ask questions such as: How exposed are we to geopolitical risks in our supply chain? Are we engaging effectively with local governments in key regions?  ... The risks of 2025 are formidable, but so are the opportunities for those who lead with purpose. With informed leadership and collaboration, we can navigate the complexities of the modern business environment with confidence and resilience. Resilience will be the defining trait of successful boards and businesses in the years ahead. It requires not only addressing known risks but also preparing for the unexpected. By prioritising scenario planning, fostering a culture of transparency, and aligning risk management with strategic goals, boards can navigate uncertainty with confidence.


Freedom from Cyber Threats: An AI-powered Republic on the Rise

Developing a resilient AI-driven cybersecurity infrastructure requires substantial investment. The Indian government’s allocation of over ₹550 crores to AI research demonstrates its commitment to innovation and data security. Collaborations with leading cybersecurity companies exemplify scalable solutions to secure digital ecosystems, prioritising resilience, ethical governance, and comprehensive data protection. Research tools like the Gartner Magic Quadrant also offer reliable and useful insights into the leading companies that offer the best and latest SIEM technology solutions. Upskilling the workforce is equally important. Training programs focused on AI-specific cybersecurity skills are preparing India’s talent pool to tackle future challenges effectively. ... Proactive strategies are essential to counter the evolution of cyber threats. Simulation tools enable organizations to anticipate and neutralise potential vulnerabilities. Cybersecurity threats can now be intercepted by cloud-scale SIEM platforms offering advanced threat detection and autonomous threat sweeps. Advanced threat research, conducted by dedicated labs within organisations, plays a crucial role in uncovering emerging attack vectors and providing actionable insights to pre-empt potential breaches.


Enterprises are hitting a 'speed limit' in deploying Gen AI - here's why

The regulatory issue, the report states, makes clear "respondents' unease about which use cases will be acceptable, and to what extent their organizations will be held accountable for Gen AI-related problems." ... The latest iteration was conducted in July through September, and received 2,773 responses from "senior leaders in their organizations and included board and C-suite members, and those at the president, vice president, and director level," from 14 countries, including the US, UK, Brazil, Germany, Japan, Singapore, and Australia, and across industries including energy, finance, healthcare, and media and telecom. ... Despite the slow pace, Deloitte's CTO is confident in the continued development, and ultimate deployment, of Gen AI. "GenAI and AI broadly is our reality -- it's not going away," writes Bawa. Gen AI is ultimately like the Internet, cloud computing, and mobile waves that preceded it, he asserts. Those "transformational opportunities weren't uncovered overnight," he says, "but as they became pervasive, they drove significant disruption to business and technology capabilities, and also triggered many new business models, new products and services, new partnerships, and new ways of working and countless other innovations that led to the next wave across industries."


NVMe-oF Substantially Reduces Data Access Latency

NVMe-oF is a network protocol that extends the parallel access and low latency features of the Nonvolatile Memory Express (NVMe) protocol across networked storage. Originally designed for local storage and common in direct-attached storage (DAS) architectures, NVMe delivers high-speed data access and low latency by directly interfacing with solid-state drives. NVMe-oF allows these same advantages to be achieved in distributed and clustered environments by enabling external storage to perform as if it were local. ... Storage targets can be dynamically shared among workloads, thus providing composable storage resources that deliver flexibility, agility and greater resource efficiency. The adoption of NVMe-oF is evident across industries where high performance, efficiency and low latency at scale are critical. Notable market sectors include: financial services, e-commerce, AI and machine learning, and specialty cloud service providers (CSPs). Legacy VM migration, real-time analytics, high-frequency trading, online transaction processing (OLTP) and the rapid development of cloud native, performance-intensive workloads at scale are use cases that have compelled organizations to modernize their data platforms with NVMe-oF solutions. Its ability to handle massive data flows with efficiency and high performance makes it indispensable for I/O-intensive workloads.


The crisis of AI’s hidden costs

Let me paint you a picture of what keeps CFOs up at night. Imagine walking into a massive data center where 87% of the computers sit there, humming away, doing nothing. Sounds crazy, right? That’s exactly what’s happening in your cloud environment. If you manage a typical enterprise cloud computing operation, you are wasting money. It’s not rare to see companies spend $1 million monthly on cloud resources, with 75% to 80% of that amount going right out the window. It’s no mystery what this means for your bottom line. ... Smart enterprises aren’t just hoping the problem will disappear; they’re taking action. Here’s my advice: Don’t rely solely on the basic tools offered by your cloud provider; they won’t give you the immediate cost visibility you need. Instead, invest in third-party solutions that provide a clear, up-to-the-minute picture of your resource utilization. Focus on power-hungry GPUs running AI workloads. ... Rather than spinning up more instances, consider rightsizing. Modern instance types offered by public cloud providers can give you more bang for your buck. ... Predictive analytics can help you scale up or down based on demand, ensuring you’re not paying for idle resources. ... Be strategic and look at the bigger picture. Evaluate reserved instances and savings plans to balance cost and performance. 
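
A back-of-the-envelope calculation makes the bottom line concrete; the figures below are simply the ones cited above, not billing data:

```python
# Annualizing the waste figures cited in the article.
monthly_spend = 1_000_000   # example $1M/month enterprise cloud bill
waste_fraction = 0.75       # low end of the 75-80% range cited above

monthly_waste = monthly_spend * waste_fraction
print(f"Wasted per month: ${monthly_waste:,.0f}")        # $750,000
print(f"Wasted per year:  ${monthly_waste * 12:,.0f}")   # $9,000,000
```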


AI security posture management will be needed before agentic AI takes hold

We ran into these issues when most companies shifted their workloads to the cloud. Authentication issues – like the dreaded S3 bucket that had a default public setting and was the cause of way too many breaches before it became secure by default – became the domain of cloud security posture management (CSPM) tools before they were swallowed up by the CNAPP acronym. Identity and permission issues (or entitlements, if you prefer) became the alphabet soup of CIEM (cloud infrastructure entitlement management), thankfully now also under the umbrella of CNAPP. AI bots will need to be monitored by similar toolsets, but they don’t exist yet. I’ll go out on a limb and suggest SAFAI (pronounced Sah-fy) as an acronym: Security Assessment Frameworks for AI. These would, much like CNAPP tools, embed themselves in agentless or transparent fashion, crawl through your AI bots collecting configuration, authentication and permission issues and highlight the pain points. You’d still need the standard panoply of other tools to protect you, since they sit atop the same infrastructure. And that’s on top of worrying about prompt injection opportunities, which you unfortunately have no control over, as they depend entirely on the models and how they are used.


Hackers Use Malicious PDFs, Pose as USPS in Mobile Phishing Scam

The bad actors make the malicious PDFs look like communications from the USPS that are sent via SMS text messages and use what the researchers called in a report Monday a “never-before-seen means of obfuscation” to help them bypass traditional security controls. They embed the malicious links in the PDF, essentially hiding them from endpoint security solutions. ... The phishing attacks are part of a larger and growing trend of what Zimperium calls “mishing,” an umbrella term for campaigns that use email, text messages, voice calls, or QR codes that exploit such weaknesses as unsafe user behavior and minimal security on many mobile devices to infiltrate corporate networks and steal information. ... “We’re witnessing phishing evolve in real time beyond email into a sophisticated multi-channel threat, with attackers leveraging trusted brands like USPS, Royal Mail, La Poste, Deutsche Post, and Australian Post to exploit limited mobile device security worldwide,” Kowski said. “The discovery of over 20 malicious PDFs and 630 phishing pages targeting organizations across 50+ countries shows how threat actors capitalize on users’ trust in official-looking communications on mobile devices.” He also noted that internal disagreements are hampering corporations’ ability to protect against such attacks.


Daily Tech Digest - January 05, 2025

Phantom data centers: What they are (or aren’t) and why they’re hampering the true promise of AI

Fake data centers represent an urgent bottleneck in scaling data infrastructure to keep up with compute demand. This emerging phenomenon is preventing capital from flowing where it actually needs to. Any enterprise that can help solve this problem — perhaps leveraging AI to solve a problem created by AI — will have a significant edge. ... As utilities struggle to sort fact from fiction, the grid itself becomes a bottleneck. McKinsey recently estimated that global data center demand could reach up to 152 gigawatts by 2030, adding 250 terawatt-hours of new electricity demand. In the U.S., data centers alone could account for 8% of total power demand by 2030, a staggering figure considering how little demand has grown in the last two decades. Yet, the grid is not ready for this influx. Interconnection and transmission issues are rampant, with estimates suggesting the U.S. could run out of power capacity by 2027 to 2029 if alternative solutions aren’t found. Developers are increasingly turning to on-site generation like gas turbines or microgrids to avoid the interconnection bottleneck, but these stopgaps only serve to highlight the grid’s limitations.


Understanding And Preparing For The 7 Levels Of AI Agents

Task-specialized agents excel in somewhat narrow domains, often outperforming humans in specific tasks by collaborating with domain experts to complete well-defined activities. These agents are the backbone of many modern AI applications, from fraud detection algorithms to medical imaging systems. Their origins trace back to the expert systems of the 1970s and 1980s, like MYCIN, a rule-based system for diagnosing infections. ... Context-aware agents distinguish themselves by their ability to handle ambiguity, dynamic scenarios, and synthesize a variety of complex inputs. These agents analyze historical data, real-time streams, and unstructured information to adapt and respond intelligently, even in unpredictable scenarios. ... The idea of self-reflective agents ventures into speculative territory. These systems would be capable of introspection and self-improvement. The concept has roots in philosophical discussions about consciousness, first introduced by Alan Turing in his early work on machine intelligence and later explored by thinkers like David Chalmers. Self-reflective agents would analyze their own decision-making processes and refine their algorithms autonomously, much like a human reflects on past actions to improve future behavior.


The 7 Key Software Testing Principles: Why They Matter and How They Work in Practice

Identifying defects early in the software development lifecycle is critical because the cost and effort to fix issues grow exponentially as development progresses. Early testing not only minimizes these risks but also streamlines the development process by addressing potential problems when they are most manageable and least expensive. This proactive approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software. ... The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or previously unknown defects. To continue identifying issues effectively, test methodologies must evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing refinement ensures that testing remains relevant and capable of discovering previously hidden problems. ... Test strategies must be tailored to the specific context of the software being tested. The requirements for different types of software—such as a mobile app, a high-transaction e-commerce website, or a business-critical enterprise application—vary significantly. As a result, testing methodologies should be customized to address the unique needs of each type of application, ensuring that testing is both effective and relevant to the software's intended use and environment.
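
The pesticide paradox is easy to see in code: a frozen test suite exercises the same paths forever. One common counter-measure is to vary inputs, for example with parameterized or property-based tests. A minimal illustration assuming pytest and the hypothesis library are available; `normalize_phone` is a made-up function under test:

```python
# Countering the pesticide paradox: rather than a fixed set of cases,
# vary inputs so each run can probe new corners of the input space.
import pytest
from hypothesis import given, strategies as st

def normalize_phone(raw: str) -> str:
    """Toy implementation: keep only the digits."""
    return "".join(ch for ch in raw if ch.isdigit())

# Static cases find the same bugs every run...
@pytest.mark.parametrize("raw,expected", [
    ("(555) 123-4567", "5551234567"),
    ("555.123.4567", "5551234567"),
])
def test_known_formats(raw, expected):
    assert normalize_phone(raw) == expected

# ...while property-based cases generate fresh inputs on every run.
@given(st.text())
def test_output_is_only_digits(raw):
    result = normalize_phone(raw)
    assert result == "" or result.isdigit()
```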


This Year, RISC-V Laptops Really Arrive

DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. ... The DeepComputing DC-Roma II laptop marked a major milestone for open source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II collectively received more than a million views. ... Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.”


The cloud architecture renaissance of 2025

First, get your house in order. The next three to six months should be spent deep-diving into current cloud spending and utilization patterns. I’m talking about actual numbers, not the sanitized versions you show executives. Map out your AI and machine learning (ML) workload projections because, trust me, they will explode beyond your current estimates. While you’re at it, identify which workloads in your public cloud deployments are bleeding money—you’ll be shocked at what you find. Next, develop a workload placement strategy that makes sense. Consider data gravity, performance requirements, and regulatory constraints. This isn’t about following the latest trend; it’s about making decisions that align with business realities. Create explicit ROI models for your hybrid and private cloud investments. Now, let’s talk about the technical architecture. The organizational piece is critical, and most enterprises get it wrong. Establish a Cloud Economics Office that combines infrastructure specialists, data scientists, financial analysts, and security experts. This is not just another IT team; it is a business function that must drive real value. Investment priorities need to shift, too. Focus on automated orchestration tools, cloud management platforms, and data fabric solutions.


How datacenters use water and why kicking the habit is nearly impossible

While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption. According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process. Ironically, while evaporative coolers are why datacenters consume so much water onsite, the same technology is commonly employed to reduce the amount of water lost to steam. Even so, the amount of water consumed through energy generation far exceeds that of modern datacenters. ... Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption. One of the most obvious is matching water flow rates to facility load and utilizing free cooling wherever possible. Using a combination of sensors and software automation to monitor pumps and filters at facilities utilizing evaporative cooling, Sharp says Digital Realty has observed a 15 percent reduction in overall water usage.


Data centres in space: they’re a brilliant idea, but a herculean challenge

Data centres beyond Earth’s atmosphere would have access to continuous solar energy and could be naturally cooled by the vacuum of space. Away from terrestrial issues like planning permission, such facilities could be rapidly deployed and expanded as the demand for more data keeps increasing. It may sound like something from a sci-fi novel, but this concept has been gaining more attention as space technology has advanced and the need for sustainable and scalable data centres has become apparent. ... Space weather, such as solar flares, could disrupt operations, while collisions with debris are a major worry – rather offsetting the fact that space-based data centres don’t have to fear earthquakes or floods. Advanced shielding could protect against things like radiation and micrometeoroids, but it will probably only do so much – particularly as Earth’s orbit becomes ever more crowded. To fix damaged facilities, advances in robotics and automation will of course help, but remote maintenance may not be able to address all issues. Sending repair crews remains a very complex and costly affair, and though the falling cost of space launches will again help here, it is still likely to be a huge burden for a few decades to come. In addition, disposing of data centre waste takes on a whole new level of complexity off-planet.


India’s Digital Data Protection Framework: Safety, Trust and Resilience

The draft rules cover various key areas, including the responsibilities of Data Fiduciaries, the role of Consent Managers, and protocols for State Data Processing, particularly in contexts like the distribution of subsidies and public services. They also detail measures for Breach Notifications, mechanisms for individuals to exercise their Data Rights, and special provisions for processing data related to children and persons with disabilities. The Data Protection Board, central to the enforcement of the Act, is set to function as a fully digital office, streamlining its operations and improving accessibility. Additionally, the rules outline procedures for appealing decisions through the Appellate Tribunal, ensuring accountability at every stage. One of the defining aspects of the draft rules is their alignment with the SARAL framework, which emphasises simplicity, clarity, and contextual definitions. To aid public understanding, illustrative examples and explanatory notes have been included, making the document accessible to stakeholders across industries, government bodies, and civil society. Both the draft rules and the accompanying explanatory notes are available on the MeitY website for public review and consultation. While legislative measures are being formalised, the government has swiftly addressed recent data breaches.


The Rise of AI Agents and Data-Driven Decisions

“In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.” Kawasaki emphasizes the developer-centric benefits as well. “AI agents will become faster and easier to build as low-code and no-code platforms mature, reducing the complexity of creating intelligent, AI-powered scenarios,” he says. ... “AI will play a transformative role in the fortification of cyber security by addressing challenges like scalability, prioritization and speed to detection. Unfortunately, cyber threats have become commonplace on the network and attackers are becoming more sophisticated in their methods – many times operating at a threshold that is very difficult to detect. As a result, organizations that fail to integrate an AI capability into their defense strategy risk being exposed to business-altering vulnerabilities. AI’s ability to monitor vast networks for imperceptible anomalies allows organizations to prioritize the most critical threats in real-time.”


New HIPAA Cybersecurity Rules Pull No Punches

Since the beginning, HIPAA has always been the best, yet insufficient, regulation dictating cybersecurity for the healthcare industry. "[There's] a history of the focus being in the wrong place because of the way HIPAA was laid out in the mid-1990s," says Errol Weiss, chief information security officer (CISO) of the Healthcare Information Sharing and Analysis Center (Health-ISAC). ... The newly proposed Security Rule aims to fix things up, with a laundry list of new requirements that touch on patch management, access controls, multifactor authentication (MFA), encryption, backup and recovery, incident reporting, risk assessments, compliance audits, and more. As Lawrence Pingree, vice president at Dispersive, acknowledges, "People have a love-hate relationship with regulations. But there's a lot of good that comes from HIPAA becoming a lot more prescriptive. Whenever you are more specific about the security controls that they must apply, the better off you are." ... Joseph J. Lazzarotti, principal at Jackson Lewis P.C., says provision 164.306 allowed for the kind of flexibility businesses always ask for: "That we're not expecting the same thing from every solo practitioner on Main Street in the Midwest versus the large hospital on the East Coast. There are obviously going to be different expectations for compliance."



Quote for the day:

“Do the best you can until you know better. Then when you know better, do better.” -- Maya Angelou

Daily Tech Digest - August 05, 2024

Faceoff: Auditable AI Versus the AI Blackbox Problem

“The notion of auditable AI extends beyond the principles of responsible AI, which focuses on making AI systems robust, explainable, ethical, and efficient. While these principles are essential, auditable AI goes a step further by providing the necessary documentation and records to facilitate regulatory reviews and build confidence among stakeholders, including customers, partners, and the general public,” says Adnan Masood ... “There are two sides of auditing: the training data side, and the output side. The training data side includes where the data came from, the rights to use it, the outcomes, and whether the results can be traced back to show reasoning and correctness,” says Kevin Marcus. “The output side is trickier. Some algorithms, such as neural networks, are not explainable, and it is difficult to determine why a result is being produced. Other algorithms such as tree structures enable very clear traceability to show how a result is being produced,” Marcus adds. ... Developing explainable AI remains the holy grail and many an AI team is on a quest to find it. Until then, several efforts are underway to develop various ways to audit AI in order to have a stronger grip over its behavior and performance. 


A developer’s guide to the headless data architecture

We call it a “headless” data architecture because of its similarity to a “headless server,” where you have to use your own monitor and keyboard to log in. If you want to process or query your data in a headless data architecture, you will have to bring your own processing or querying “head” and plug it into the data — for example, Trino, Presto, Apache Flink, or Apache Spark. A headless data architecture can encompass multiple data formats, with data streams and tables as the two most common. Streams provide low-latency access to incremental data, while tables provide efficient bulk-query capabilities. Together, they give you the flexibility to choose the format that is most suitable for your use cases, whether it’s operational, analytical, or somewhere in between. ... Many businesses today are building their own headless data architectures, even if they’re not quite calling it that yet, though using cloud services tends to be the easiest and most popular way to get started. If you’re building your own headless data architecture, it’s important to first create well-organized and schematized data streams, before populating them into Apache Iceberg tables.
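
As a sketch of that "well-organized, schematized streams first" step, here is what producing a versioned, explicitly structured event to a Kafka topic might look like using the kafka-python client. The broker address and topic name are assumptions; the processing "head" (Trino, Flink, or Spark) would be plugged in separately to consume the stream or query the Iceberg table it feeds:

```python
# Producing a schematized event to a stream. A downstream "head"
# (Flink, Trino, Spark) is brought separately to process or query it.
import json
from datetime import datetime, timezone
from kafka import KafkaProducer  # kafka-python; any Kafka client works

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",      # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A fixed, explicit structure for every event keeps the stream
# queryable by whatever head gets plugged in later.
event = {
    "order_id": "o-1001",
    "customer_id": "c-42",
    "amount_usd": 19.99,
    "ts": datetime.now(timezone.utc).isoformat(),
}
producer.send("orders.v1", value=event)  # versioned topic name by convention
producer.flush()
```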


The Hidden Costs of the Cloud Skills Gap

Properly managing and scaling cloud resources requires expertise in load balancing, auto-scaling, and cost optimization. Without these skills, companies may face inefficiencies, either by over-provisioning or under-utilizing resources. Inexperienced or overstretched staff might struggle with performance optimization, resulting in slower applications and services, which can negatively impact user satisfaction and harm the company's reputation. ... Employees lacking the necessary skills to fully leverage cloud technologies may be less likely to propose innovative solutions or improvements, potentially leading to a lack of new product development and stagnation in business growth. The cloud presents abundant opportunities for innovation, including AI, machine learning, and advanced data analytics. Companies without the expertise to implement these technologies risk missing out on significant competitive advantages and exciting new discoveries. The bottom line is that skilled professionals often drive the adoption of new technologies because they have the knowledge to experiment in the field.


Architectural Retrospectives: The Key to Getting Better at Architecting

The traditional architectural review, especially if conducted by outside parties, often turns into a blame-assignment exercise. The whole point of regular architectural reviews in the MVA approach is to learn from experience so that catastrophic failures never occur. ... The mechanics of running an architectural retrospective session are identical to those of running a Sprint Retrospective in Scrum. In fact, an architectural focus can be added to a more general-purpose retrospective to avoid creating yet another meeting, so long as all the participants are involved in making architectural decisions. This can also be an opportunity to demonstrate that anyone can make an architectural decision, not only the "architects." ... Many teams skip retrospectives because they don’t like to confront their shortcomings. Architectural retrospectives are even more challenging because they examine not just the way the team works, but the way the team makes decisions. But architectural retros have great pay-offs: they can uncover unspoken assumptions and hidden biases that prevent the team from making better decisions. If you retrospect on the way that you create your architecture, you will get better at architecting.


Design flaw has Microsoft Authenticator overwriting MFA accounts, locking users out

Microsoft confirmed the issue but said it was a feature not a bug, and that it was the fault of users or companies that use the app for authentication. Microsoft issued two written statements to CSO Online but declined an interview. Its first statement read: “We can confirm that our authenticator app is functioning as intended. When users scan a QR code, they will receive a message prompt that asks for confirmation before proceeding with any action that might overwrite their account settings. This ensures that users are fully aware of the changes they are making.” One problem with that first statement is that it does not correctly reflect what the message says. The message says: “This action will overwrite existing security information for your account. To prevent being locked out of your account, continue only if you initiated this action from a trusted source.” The first sentence of the warning window is correct, in that the action will indeed overwrite the account. But the second sentence incorrectly tells the user to proceed as long as two conditions are met: that the user initiated the action; and that it is a trusted source.


Automation Resilience: The Hidden Lesson of the CrowdStrike Debacle

Automated updates are nothing new, of course. Antivirus software has included such automation since the early days of the Web, and our computers are all safer for it. Today, such updates are commonplace – on computers, handheld devices, and in the cloud. Such automations, however, aren’t intelligent. They generally perform basic checks to ensure that they apply the update correctly. But they don’t check to see if the update performs properly after deployment, and they certainly have no way of rolling back a problematic update. If the CrowdStrike automated update process had checked to see if the update worked properly and rolled it back once it had discovered the problem, then we wouldn’t be where we are today. ... The good news: there is a technology that has been getting a lot of press recently that just might fit the bill: intelligent agents. Intelligent agents are AI-driven programs that work and learn autonomously, doing their good deeds independently of other software in their environment. As with other AI applications, intelligent agents learn as they go. Humans establish success and failure conditions for the agents and then feed back their results into their models so that they learn how to achieve successes and avoid failures.
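
To make the idea concrete, here is a minimal sketch of an update loop that verifies health after deployment and rolls back on failure. Every function and path here is a hypothetical stand-in, not any vendor's actual update mechanism:

```python
# Sketch of an update-with-verification loop: apply, health-check,
# roll back on failure. All names are hypothetical stand-ins.
import shutil
import subprocess
import time

def apply_update(pkg: str, target: str) -> str:
    """Install the new version, keeping a snapshot of the old one."""
    backup = target + ".bak"
    shutil.copy(target, backup)   # snapshot the known-good version
    shutil.copy(pkg, target)      # install the update
    return backup

def healthy(probe_cmd: list[str], retries: int = 3) -> bool:
    """Post-deployment check: does the updated component actually work?"""
    for _ in range(retries):
        if subprocess.run(probe_cmd).returncode == 0:
            return True
        time.sleep(5)
    return False

def update_with_rollback(pkg: str, target: str, probe_cmd: list[str]) -> bool:
    backup = apply_update(pkg, target)
    if healthy(probe_cmd):
        return True
    shutil.copy(backup, target)   # automatic rollback on failure
    return False
```

Nothing here requires AI; the point is that even this much verification is missing from many update pipelines, and intelligent agents would layer learning on top of the same apply/check/rollback loop.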


Is HIPAA enough to protect patient privacy in the digital era?

HIPAA requires covered entities to establish strong data privacy policies, but it doesn’t regulate cybersecurity standards. HIPAA was deliberately designed to be tech agnostic, on the basis that this would keep it relevant despite frequent technology changes. But this could be a glaring omission. For example, Change Healthcare, a medical insurance claims clearinghouse, experienced a data breach when a hacker used stolen credentials to enter the network. If Change had implemented multi-factor authentication (MFA), a basic cybersecurity measure, the breach might not have taken place. But MFA isn’t specified in the HIPAA Security Rule, which was passed 20 years ago. Cybersecurity in the healthcare industry falls through the cracks of other regulations. The CISA update in early 2024 requires companies in critical infrastructure industries to report cyber incidents within 72 hours of discovery. ... “Crucially, there are many third-parties in the healthcare ecosystem that our members contract with who would not be considered ‘covered entities’ under this proposal, and therefore, would not be obligated to share or disclose that there had been a substantial cyber incident – or any cyber incident at all,” warns Russell Branzell, president and CEO of CHIME.


The downtime dilemma: Why organizations hesitate to switch IT infrastructure providers

Making a switch is not always an easy decision. So, how can a business be sure it’s doing the right thing? There are four boxes that a business should look for its IT infrastructure provider to tick before contemplating a move. Firstly, is the provider there when needed? Reliable round-the-clock customer support is crucial for addressing any issues that arise before, during, and after a switch. For businesses with small IT departments or limited resources, this external support offers reliable infrastructure management without needing an extensive in-house team. Next, does the provider offer high uptime guarantees and Service Level Agreements (SLAs) outlining compensation for downtime? By prioritizing service providers with Uptime Institute’s Tier 4 classification, businesses are opting for a partner that’s certified as fully fault-tolerant and highly resilient, with a design availability of 99.995 percent. This protects the business’ crucial IT systems, keeping them operational despite disruptive activity such as a cyberattack, failing components, or unexpected outages.
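
Availability percentages translate directly into a yearly downtime budget, which is worth computing before signing any SLA. A quick arithmetic sketch:

```python
# Downtime budget implied by an availability guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_min(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability figure."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.995):
    print(f"{pct}% uptime -> {downtime_budget_min(pct):,.1f} minutes/year")
# 99.9%   -> ~525.6 minutes (about 8.8 hours)
# 99.995% -> ~26.3 minutes (the Tier IV design figure)
```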


Inside CIOs’ response to the CrowdStrike outage — and the lessons they learned

The first thing Alli did was gather the incident response team to assess the situation and establish the company’s immediate response plan. “We had to ensure that we could maintain business continuity while we addressed the implications of the outage,’’ Alli says. Communication was vital and Alli kept leadership and stakeholders informed about the situation and the steps IT was taking with regular updates. “It’s easy to panic in these situations, but we focused on being transparent and calm, which helped to keep the team grounded,’’ Alli says. Additionally, “The lack of access to critical security insights put us at risk temporarily, but more importantly, it highlighted vulnerabilities in our overall security posture. We had to quickly shift some of our security protocols and rely on other measures, which was a reminder of the importance of having a robust backup plan and redundancies in place,’’ Alli says. Mainiero agrees, saying that in this type of situation, “you have to take on a persona — if you’re panicked, your teams are going to panic.” He says that training has taught him never to raise his voice.


SASE: This Time It’s Personal

Working patterns are changing fast. Millennials and GenZs – the first true digital generation – no longer expect to go to the same place every day. Just as the web broke the link between bricks and mortar and shopping, we are now seeing the disintermediation of the workplace, which is anywhere and everywhere. The trend was accelerated by the pandemic, but it’s a mistake to believe that the pandemic created hybrid working. So, while SASE makes the right assumptions about the need to integrate networking and security, it doesn't go far enough. The networking and security stack is still office-bound and centralized. If you were designing this from the ground up, you wouldn't start from here. A more radical approach, what we call personal SASE, is to left-shift the networking and security stack all the way to the user edge. Think of it like the transition from the mainframe to the minicomputer to the PC in the early 1980s, a rapid migration of compute power to the end user. Personal SASE involves a similar architectural shift with commensurate productivity gains for the modern hybrid workforce, who expect but rarely get the same level of network performance and seamless security that they currently experience when they step into the office.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - May 22, 2024

Guide to Kubernetes Security Posture Management (KSPM)

Bad security posture impacts your ability to respond to new and emerging threats because of extra “strain” on your security capabilities caused by misconfigurations, gaps in tooling, or inadequate training. ... GitOps manages all cluster changes via Configuration as Code (CaC) in Git, eliminating manual cluster modifications. This approach aligns with the Principle of Least Privilege and offers benefits beyond security. GitOps ensures deployment predictability, stability and admin awareness of the cluster’s state, preventing configuration drift and maintaining consistency across test and production clusters. Additionally, it reduces the number of users with write access, enhancing security. ... Human log analysis is crucial for retrospectively reviewing security incidents. However, real-time monitoring and correlation are essential for detecting incidents initially. While manual methods like SIEM solutions with dashboards and alerts can be effective, they require significant time and effort to extract relevant data. 
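
In practice, many KSPM checks boil down to mechanical scans over manifests pulled from Git. A toy example of such a check using PyYAML; the three rules shown are illustrative and nowhere near a complete policy set:

```python
# A toy KSPM-style scan: flag common misconfigurations in a Kubernetes
# Pod manifest. Real tools apply hundreds of such rules continuously.
import yaml  # PyYAML

MANIFEST = """
apiVersion: v1
kind: Pod
metadata: {name: demo}
spec:
  containers:
    - name: app
      image: example/app:latest
      securityContext: {privileged: true}
"""

def audit(manifest: str) -> list[str]:
    doc = yaml.safe_load(manifest)
    findings = []
    for c in doc.get("spec", {}).get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if str(c.get("image", "")).endswith(":latest"):
            findings.append(f"{c['name']}: mutable ':latest' image tag")
        if "resources" not in c:
            findings.append(f"{c['name']}: no resource limits set")
    return findings

print(audit(MANIFEST))
```

Running the same scan on every Git commit, rather than on live clusters, is what ties KSPM to the GitOps model described above: misconfigurations are caught before they ever reach a cluster.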


Where’s the ROI for AI? CIOs struggle to find it

The AI market is still developing, and some companies are adopting the technology without a specific use case in mind, he adds. Kane has seen companies roll out Microsoft Copilot, for example, without any employee training about its uses. ... “I have found very few companies who have found ROI with AI at all thus far,” he adds. “Most companies are simply playing with the novelty of AI still.” The concern about calculating the ROI also rings true to Stuart King, CTO of cybersecurity consulting firm AnzenSage and developer of an AI-powered risk assessment tool for industrial facilities. With the recent red-hot hype over AI, many IT leaders are adopting the technology before they know what to do with it, he says. “I think back to the first discussions that we had within the organizations that we are working with, and it was a case of, ‘Here’s this great new thing that we can use now, let’s go out and find a use for it,’” he says. “What you really want to be doing is finding a problem to solve with it first.” As a developer who has integrated AI into his own software, King is not an AI skeptic.


100 Groups Urge Feds to Put UHG on Hook for Breach Notices

Some experts advise HIPAA-regulated entities that are likely affected by a Change Healthcare breach to take precautionary measures now to prepare for their potential notification duties involving a compromise of their patients' PHI. ... HIPAA-regulated Change Healthcare customers also have an obligation under HIPAA to perform "reasonable diligence" to investigate and obtain information about the incident to determine whether the incident triggers notice obligations to their patients or members, said attorney Sara Goldstein of law firm BakerHostetler. Reasonable diligence includes Change Healthcare customers frequently checking UHG and Optum's websites for updates on the restoration and data analysis process, contacting their Change Healthcare account representative on a regular basis to see if there are any updates specific to their organization, and engaging outside privacy counsel to submit a request for information directly to UnitedHealth Group to obtain further information about the incident, Goldstein said.


‘Innovation Theater’ in Banking Gives Way to a More Realistic and Productive Function

The conservative approach many institutions are taking to GenAI reflects that reality. Buy Now, Pay Later meanwhile makes a great example of how exciting new innovations can unexpectedly reveal a dark side. ... In many institutions, innovation has become less about pure invention and more about applying what’s out there already in new ways and combinations to solve common problems. Doing so doesn’t necessarily require geniuses, but you do need highly specialized “plumbers” who can link together multiple technologies in smart ways. Even the regulatory view has evolved. There was a time when federal regulators held open doors to innovation, even to the extent of offering “sandboxes” to let innovations sprout without weighing them down initially with compliance burdens. But the Consumer Financial Protection Bureau, under the Biden administration, did away with its sandbox early on. Washington today walks a more cautious line on innovation, and that line could veer. The bottom line? Innovators who take their jobs, and the impact of their jobs, seriously, realize that banking innovation must grow up.


AI glasses + multimodal AI = a massive new industry

Both OpenAI and Google demos clearly reveal a future where, thanks to the video mode in multimodal AI, we’ll be able to show AI something, or a room full of somethings, and engage with a chatbot to help us know, process, remember or understand. It would be all very natural, except for one awkward element. All this holding and waving around of phones to show it what we want it to “see” is completely unnatural. Obviously — obviously! — video-enabled multimodal AI is headed for face computers, a.k.a. AI glasses. And, in fact, one of the most intriguing elements of the Google demo was that during a video demonstration, the demonstrator asked Astra-enhanced Gemini if it remembered where her glasses were, and it directed her back to a table, where she picked up the glasses and put them on. At that point, the glasses — which were prototype AI glasses — seamlessly took over the chat session from the phone (the whole thing was surely still running on the phone, with the glasses providing the camera, microphones and so on).
 

Technological complexity drives new wave of identity risks

The concept of zero standing privilege (ZSP) requires that a user only be granted the minimum levels of access and privilege needed to complete a task, and only for a limited amount of time. Should an attacker gain entry to a user’s account, ZSP ensures there is far less potential for attackers to access sensitive data and systems. The study found that 93% of security leaders believe ZSP is effective at reducing access risks within their organization. Additionally, 91% reported that ZSP is being enforced across at least some of their company’s systems. As security leaders face greater complexity across their organizations’ systems and escalating attacks from adversaries, it’s no surprise that risk reduction was cited as respondents’ top priority for identity and access management (55%). This was followed by improving team productivity (50%) and automating processes (47%). Interestingly, improving user experience was cited as the top priority among respondents who experienced multiple instances of attacks or breaches due to improper access in the last year.
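
Conceptually, ZSP swaps standing entitlements for grants that expire on their own. A minimal sketch of the idea follows; it is illustrative only, not any vendor's API:

```python
# Zero standing privilege in miniature: access is granted just-in-time,
# scoped to one resource, and expires automatically. Illustrative only.
import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    resource: str
    expires_at: float

class ZSPBroker:
    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def request_access(self, user: str, resource: str,
                       ttl_s: int = 900) -> Grant:
        """Grant the minimum access needed, for a bounded time only."""
        g = Grant(user, resource, time.time() + ttl_s)
        self._grants.append(g)
        return g

    def is_allowed(self, user: str, resource: str) -> bool:
        now = time.time()
        self._grants = [g for g in self._grants if g.expires_at > now]
        return any(g.user == user and g.resource == resource
                   for g in self._grants)
```

Because no grant outlives its task, a compromised account carries only whatever narrow, time-boxed access happens to be live at that moment.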


The Legal Issues to Consider When Adopting AI

Different types of data bring different issues of consent and liability. For example, consider whether your data is personally identifiable information, synthetic content (typically generated by another AI system), or someone else’s intellectual property. Data minimization—using only what you need—is a good principle to apply at this stage. Pay careful attention to how you obtained the data. OpenAI has been sued for scraping personal data to train its algorithms. And, as explained below, data-scraping can raise questions of copyright infringement. ... Companies also need to consider the potential for inadvertent leakage of confidential and trade-secret information by an AI product. If allowing employees to internally use technologies such as ChatGPT (for text) and GitHub Copilot (for code generation), companies should note that such generative AI tools often take user prompts and outputs as training data to further improve their models. Luckily, generative AI companies typically offer more secure services and the ability to opt out of model training.


How innovative power sourcing can propel data centers toward sustainability

The increasing adoption of Generative AI technologies over the past few years has placed unprecedented energy demands on data centers, coinciding with a global energy emergency exacerbated by geopolitical crises. Electricity prices have since reached record highs in certain markets, while oil prices soared to their highest level in over 15 years. Volatile energy markets have awakened a need in the general population to become more flexible in their energy use. At the same time, the trends present an opportunity for the data center sector to get ahead of the game. By becoming managers of energy, as opposed to just consumers, market players can find more efficient and cost-effective ways to source power. Innovative renewable options present a highly attractive avenue in this regard. As a result, data center providers are working more collaboratively with the energy sector for solutions. And for them, it’s increasingly likely that optimizing efficiency won’t be just about being close to the grid, but also about being close to the power-generation site – or even generating and storing power on-site.


Google DeepMind Introduces the Frontier Safety Framework

Existing protocols for AI safety focus on mitigating risks from existing AI systems. Some of these methods include alignment research, which trains models to act within human values, and implementing responsible AI practices to manage immediate threats. However, these approaches are mainly reactive and address present-day risks, without accounting for the potential future risks from more advanced AI capabilities. In contrast, the Frontier Safety Framework is a proactive set of protocols designed to identify and mitigate future risks from advanced AI models. The framework is exploratory and intended to evolve as more is learned about AI risks and evaluations. It focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. The Framework aims to align with existing research and Google’s suite of AI responsibility and safety practices, providing a comprehensive approach to preventing any potential threats.


Proof-of-concept quantum repeaters bring quantum networks a big step closer

There are two main near-term use cases for quantum networks. The first use case is to transmit encryption keys. The idea is that public key encryption – the type currently used to secure Internet traffic – could soon be broken by quantum computers. Symmetric encryption – where the same key is used to both encrypt and decrypt messages – is more future-proof, but you need a way to get that key to the other party. ... Today, however, the encryption we currently have is good enough, and there’s no immediate need for companies to look for secure quantum networks. Plus, there’s progress already being made on creating quantum-proof encryption algorithms. The other use for quantum networks is to connect quantum computers. Since quantum networks transmit entangled photons, the computers so connected would also be entangled, theoretically allowing for the creation of clustered quantum computers that act as a single machine. “There are ideas for how to take quantum repeaters and parallelize them to provide very high connectivity between quantum computers,” says Oskar Painter, director of quantum hardware at AWS.
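
The key-distribution problem is easiest to see with symmetric encryption: both parties must hold the same secret before they can communicate at all. A minimal sketch assuming the `cryptography` package is installed, using its Fernet symmetric scheme:

```python
# Symmetric encryption: one key both encrypts and decrypts, so the
# whole problem is getting that key to the other party safely -- the
# job a quantum key-distribution network would take on.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # this key must reach the recipient securely
cipher = Fernet(key)

token = cipher.encrypt(b"wire transfer: $1,000,000")
print(cipher.decrypt(token))  # only a holder of the same key can do this
```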



Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison