Daily Tech Digest - March 25, 2024

Two ways to improve GDPR enforcement

Centralised enforcement would certainly add efficiency and consistency to the enforcement process. However, implementation could take years, and even once it’s in place, there’s a risk that individual member states could take issue with rulings made by the central enforcement agency. The other foreseeable approach is for the EU to stick with its current decentralised approach to GDPR enforcement, but to invest in measures that would make enforcement more consistent and efficient. ... Developing clearer guidelines about GDPR interpretation would help, too. As a principles-based framework, the GDPR can be overwhelming to interpret, making it challenging for businesses to comply and for enforcement authorities in various countries to determine when a violation has taken place. Centralised interpretation guidance in the form of clarifications about complex GDPR requirements or examples of successful compliance would help ensure more consistent and efficient enforcement of the GDPR, even without a centralised enforcement agency.


How to get your CFO to buy into a better model for IT funding

To ensure persistent teams stay within budget, and thereby reduce risk, it’s crucial that executives understand the fundamental agile principles related to flexible scope and fixed budget. Sometimes, management needs to make a change in direction, and persistent teams allow for this. By using data insights from the quarterly business performance report, the CFO is made aware of situations where the organisation is not tracking towards goals. The executive is then empowered to reprioritise, while still focusing on the ‘why’ or outcome to be delivered. They can change persistent teams’ focus by working with them to swap one initiative for another — rather than asking for additional funding. Making trade-offs means they need to prioritise wisely, as there is a fixed budget to work within. “When there is a change in direction, executives are empowered to make trade-offs to deliver on their needs. It is no longer an ‘ask’ of technology,” says Hubbard, regarding Rest’s use of an agile approach in conjunction with persistent funding. We set up a persistent pilot team at Rest in 2023 to test out the concept. About three months into the six-month pilot, the team uncovered that one of the initiatives wasn’t technically feasible at this time.


7 Tips for Managing Cross-Border Data Transfers

Partners are great for business, but they can misunderstand and make mistakes, too. Their errors can cost your organization as much as its own mistakes can. Take steps to ensure all third parties you work with comply as well. “Increasingly, companies that want to mature and manage their cross-border data transfers are putting in place three-part vendor risk programs that include pre-contract assessments; contractual safeguards, such as model privacy and data protection provisions and data processing addendums (DPAs); and post-contract audits,” says Jim Koenig, a partner at Troutman Pepper and co-chair of its privacy and cyber practice group. The first ensures third parties meet your security requirements and provides an inventory of data transfers. The second -- the contractual safeguards -- “define the specific uses and restrictions on secondary uses, including AI algorithm training, and compliance requirements,” Koenig says. And the last, post-contract audits, “assess the recipient company’s compliance with the applicable data transfer laws, such as the EU GDPR, Saudi Arabia’s data protection law, China’s PIPL and others, and specific contract requirements,” he says.


Getting Ahead of Shadow Generative AI

Generative AI should help you differentiate what your company does. However, using public LLMs alone will not deliver this, and you will sound the same as everyone else. Companies can make their generative AI strategies more effective and tailored for them and for employees by bringing their own data to the table using retrieval augmented generation, or RAG. RAG takes your own data, gets it ready for use with generative AI, and then passes this data as context into the LLM when your employee asks for a response. RAG is part of solving problems like hallucinations, and it also makes results more relevant for your organization and your customers, rather than returning results similar to those of other companies asking the same kinds of questions. ... To implement this, you will have to combine various tools, from vector data stores to AI integrations, to build a RAG stack that makes it easier and faster to get started. Delivering this quickly will help you prevent some of those “off the books” deployments that teams might try to do for themselves while they wait for central IT.
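The retrieve-then-prompt flow described above can be sketched in a few lines. This is a minimal illustration, not a production RAG stack: the toy `embed()` function and the sample documents are stand-ins for a real embedding model and vector database.

```python
# Minimal RAG sketch: embed the question, retrieve the closest documents
# from an in-memory "vector store", and build an augmented prompt.
from math import sqrt

def embed(text):
    # Toy embedding: normalized character-frequency vector over a-z.
    # A real deployment would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(question, documents, top_k=2):
    # Rank documents by similarity to the question; keep the top_k.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(question, documents):
    # Pass the retrieved data as context into the LLM call.
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
    "The warranty covers manufacturing defects for one year.",
]
prompt = build_prompt("When can I contact support?", docs)
print(prompt)
```

The prompt produced here would then be sent to whatever LLM the organization uses; grounding the answer in retrieved context is what makes the response organization-specific.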


The state of ransomware: Faster, smarter, and meaner

The pace of innovation on the part of ransomware criminal groups has hit a new high. “In the past two years, we have witnessed a hockey stick curve in the rate of evolution in the complexity, speed, sophistication, and aggressiveness of these crimes,” says John Anthony Smith, CSO and founder of cybersecurity firm Conversant Group. ... “They have combined innovative tactics with complex methods to compromise the enterprise, take it to its knees, and leave it little room to negotiate,” Smith says. One sign of this is that dwell time — the length of time from first entry to data exfiltration, encryption, backup destruction, or ransom demand — has dramatically shortened. “While it used to take weeks, threat actors are now often completing attacks in as little as four to 48 hours,” says Smith. Another new tactic is that attackers are evading multifactor authentication by using SIM swapping attacks and token capture, or by taking advantage of MFA fatigue on the part of employees. Once a user authenticates, tokens are used to authenticate further requests so that the user doesn’t have to keep going through the authentication process.


Companies are about to waste billions on AI — here’s how not to become one of them

As you think about saying yes to that next AI project, look at the cost of the needed resources, today and over time, to sustain that project. Ten hours of work from your data science team often has 5X the engineering, DevOps, QA, product and SysOps time buried underneath. Companies are littered with fragments of projects that were once a good idea but lacked ongoing investment to sustain them. Saying no to an AI initiative is hard today, but too-frequent yeses often come at the cost of fully funding the few things worth supporting tomorrow. Another dimension of cost is the increasing marginal cost that AI drives. These large models are costly to train, run and maintain. ... The simplest bets are the ones that better the business you are already in. The old BASF commercial comes to mind: “We don’t make the things you buy, we make the things you buy better.” If the application of AI provides you momentum in the products you already make, that bet is the easiest to make and scale. The second easiest bets are the ones that let you move up and down the value chain or laterally expand into other sectors.


Securing Modern Banking Applications – Do’s and Don’ts

The consumer also plays a pivotal role in the security of their mobile banking. As the device user, consumers and/or employees need to be wary of banking applications that ask for excessive accessibility permissions. Granting accessibility permissions without closely examining what is being requested can be risky, because these permissions can give apps broad control over a device’s functionality. Banking trojans will often ask for and then exploit accessibility features to automate transactions, capture sensitive data (such as passwords) or overlay fake login screens on legitimate banking apps. Even when an app is legitimate, consumers should still proceed with caution, knowing that trojans will often use this “preconceived trust” as a launching pad for their destructive attacks. Consumers should also avoid downloading banking apps from unvetted sources, such as third-party app stores that lack the rigorous security controls of the official Apple and Google stores. Lastly, beware of phishing emails, URLs or texts that look legitimate. Threat actors will often reverse-engineer banking apps to steal logos and other icons to imitate the actual app.


8 cybersecurity predictions shaping the future of cyber defense

By 2028, the adoption of GenAI will collapse the skills gap, removing the need for specialized education from 50% of entry-level cybersecurity positions. GenAI augmentation will change how organizations hire and train cybersecurity workers, screening for the right aptitude as much as for the right education. Mainstream platforms already offer conversational augments, but these will evolve. Gartner recommends cybersecurity teams focus on internal use cases that support users as they work; coordinate with HR partners; and identify adjacent talent for more critical cybersecurity roles. ... By 2026, enterprises combining GenAI with an integrated platform-based architecture in security behavior and culture programs (SBCPs) will experience 40% fewer employee-driven cybersecurity incidents. Organizations are increasingly focused on personalized engagement as an essential component of an effective SBCP. GenAI has the potential to generate hyperpersonalized content and training materials that take into account an employee’s unique attributes and context. According to Gartner, this will increase the likelihood of employees adopting more secure behaviors in their day-to-day work, resulting in fewer cybersecurity incidents.


Data Security Posture Management in the Education Sector: What You Need to Know

The first and perhaps most crucial step is identifying where all instances of student data reside within your institution. With a best-of-breed DSPM solution, advanced machine learning (ML) and AI can autonomously scan and categorize student data, regardless of where it’s stored (including in structured and unstructured data repositories, email/messaging applications, or cloud or on-premises storage), including its semantic context. It can identify the data, learn its usage patterns, and determine if it’s at risk. This thorough discovery and identification process is also especially important for educational institutions aiming for FERPA compliance. ... The ability to identify and classify sensitive student data puts institutions in a great place, but once identified, any vulnerabilities and risks found must be remediated. Leveraging deep learning, DSPM solutions can compare each data element with baseline security practices used by similar data to detect risk -- even without relying on rules and policies. Even better is to address these access risks in real time -- whether that means remediating access control issues, disabling sensitive file sharing, or blocking an attachment in a messaging platform.
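The discovery and classification step described above relies on vendors' proprietary ML, but the basic idea can be illustrated with a rule-based stand-in. Everything here is hypothetical: the patterns, field formats, and labels are for illustration only, not FERPA guidance or any product's actual behavior.

```python
# Simplified stand-in for DSPM data discovery: scan free-form records and
# flag ones that look like sensitive student data. Real DSPM tools use ML
# and semantic context; this sketch uses regular expressions.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    "student_id": re.compile(r"\bS\d{7}\b"),              # hypothetical ID format
}

def classify(text: str) -> list[str]:
    """Return the labels of every sensitive-data pattern found in text."""
    return [label for label, pat in PATTERNS.items() if pat.search(text)]

record = "Student S1234567 (jane@example.edu) SSN 123-45-6789"
print(classify(record))   # this record trips all three patterns
print(classify("lecture notes, nothing sensitive"))
```

A real DSPM pipeline would run a scan like this continuously across structured and unstructured stores, then feed the hits into risk scoring and remediation, rather than matching fixed patterns.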


API Security Best Practices That CTOs Can Action Today

The basic function of APIs is to facilitate the exchange of data from one system to another, a process that inherently multiplies potential security risks. The current pace of innovation, with new services, features, and operations being rolled out almost daily, means that several foundational security practices are often overlooked. This oversight can dramatically weaken an organization’s security posture because APIs, by their very design, open up access to data and systems – often beyond the direct control of the organization. This aspect of APIs – the “link” to external entities – is a double-edged sword. While it enables unprecedented levels of interconnectivity and functionality between applications, it also demands that security controls be as robust and comprehensive as those applied to internal access management. However, therein lies the problem: while developers and IT professionals are adept at quickly setting up APIs in the interests of enhancing their services and operations, they often don’t apply the same security standards as they would to strictly internal operations.



Quote for the day:

"The more I help others to succeed, the more I succeed." -- Ray Kroc

Daily Tech Digest - March 24, 2024

How AI is changing scientific discovery

While the spread of misinformation is one of the many ways that AI is changing science, an even broader and more positive application of this technology is the self-driving lab (SDL). In an SDL, AI selects new material formulations and robotic arms synthesise them. While this technology is currently limited to discovering new materials, it relieves researchers of having to grapple with trillions of possible formulations. This greatly improves labour productivity in science, saving time and money and allowing researchers more time for creative aspects such as experimental design. In fact, in April 2023, UofT was awarded Canada’s largest-ever research grant, $200 million, for the Acceleration Consortium—a UofT-based network that aims to accelerate materials discovery through AI and robotics. Through this funding, autonomous labs are being built at UofT, such as in the Leslie Dan Faculty of Pharmacy, where AI, automation, and advanced computing are used to iteratively test and develop material combinations for new drug formulations.


Need for upskilling and cross-skilling amongst cybersecurity professionals

The urgency of upskilling and cross-skilling is further underscored by the shortage of skilled cybersecurity professionals. In the post-pandemic years, digital transformation across industries has resulted in massive demand for cybersecurity professionals with the right skills. As per estimates, there are over 5.4 million cybersecurity professionals globally, and nearly 4 million job openings in the field. In fact, 67% of cybersecurity professionals report that their organizations face a shortage of adequately skilled personnel to secure their digital infrastructure. Even in India, where digital infrastructure has grown by leaps and bounds in the last two years, there is a major need to train more people in cutting-edge cybersecurity practices. As organizations struggle to fill critical cybersecurity roles, existing professionals must take the initiative to expand their skill sets. Training programs, certifications, and continuous learning opportunities can empower cybersecurity experts to bridge the gap between their current knowledge and the ever-evolving threat landscape.


Data Sovereignty and Digital Governance in APAC: What’s Next?

Data sources, datasets, and workloads are increasingly diverse, and cloud, multi-cloud, and edge computing now permeate the data management cycle. Coupled with more interconnected and decentralised organisations, embedded solutions, and loosely coupled architectures, this has necessitated breaking down data silos while maintaining data and metadata quality. It can also erode geopolitical borders, because data may be generated, processed, localised, stored, transferred, transformed, and accessed across different countries. This requires knowledge of, and compliance with, the privacy and security laws of all applicable countries, especially across cloud, edge, co-location, and on-premises ecosystems. Moreover, complex situations and conflicts can arise where data protection regimes diverge, such as data flows between the EU GDPR and US privacy laws, with further variation possible at federal or state levels. Similar considerations must be planned and executed for cross-border data flows, backup, and disaster recovery.


Boards of directors: The final cybersecurity defense for industrials

First and foremost, board members provide oversight and guidance. They should ensure that executives and their teams set a high standard for cybersecurity, and then follow through on achieving that standard by ensuring that security is embedded by design in digital products and that technology teams share responsibility for cybersecurity. The board is the last line of defense in ensuring such initiatives get planned and funded. Boards also look at risk prioritization and trade-offs. Board members are often intimidated when it comes to determining risk levels and giving fact-based inputs into risk trade-offs. In addition, the vocabulary and reporting used by security teams with their boards are often inconsistent and technical. As a result, it can be overwhelming for board members who want to contribute meaningfully to reducing cybersecurity risk but are not quite sure how. A board member does not need to have specific knowledge about cybersecurity to add value. Instead, they need to probe the cyber team about potential business impacts. This means the cyber team should equate cyber issues and controls with business risks.


Compliance meets AI: A banking love story

Most financial institutions are at preliminary stages in evaluating opportunities to use generative AI in their operations. Some of the areas where we are seeing the anticipated use of LLMs are in customer services. Large language models can interact with a bank’s customers in very natural conversations. Depending on the data that the bank trains the LLM on, chatbots can answer questions about customer accounts and even provide recommended product offerings and investment advice. Several large banks are working with internal LLM models to capture call center notes, organize information for investment advisors and organize other product data for customer service reps, with plans to roll out to more customer-facing uses as extensive testing addresses potential risks. Banks are also assessing opportunities to improve internal operations. Generative AI capabilities enable new ways to analyze data. One practical use case for most organizations is to train LLMs on all the pockets of organizational information that employees need to access to do their jobs.


Navigating fraud and AML challenges with innovation solutions in a new financial frontier

In the intricate domain of Anti-Money Laundering (AML) compliance, the quality of data plays a pivotal role. Wolters Kluwer’s CCH iFirm AML module underscores this by ensuring access to leading credit bureaus and governmental data sets. The accuracy, completeness, timeliness, consistency, and relevance of data are fundamental to the effective detection, prevention, and reporting of potential money laundering activities. High-quality data not only aids in identifying suspicious transactions more accurately but also enhances the efficiency of the compliance process. For CFOs, this means a significant reduction in the risk of non-compliance penalties and the fostering of trust with regulatory bodies. ... The advent of AI-boosted cyber threats poses a significant challenge for CFOs in 2024. Darktrace’s study reveals a stark reality: while 89% of IT security specialists anticipate these threats will significantly impact their organisations within the next two years, 60% admit to being ill-prepared to defend against them. The escalation in sophisticated phishing attacks, leveraging advanced language and punctuation, underscores the evolving nature of cyber threats. 


Prompt Injection Vulnerability in Google Gemini Allows for Direct Content Manipulation

The researchers say that the prompt injection attacks impact Gemini Advanced accessed by users with Google Workspace, and organizations that are making use of the Gemini API. The content manipulation risk is also said to more generally apply to world governments as it could be used to output inaccurate or falsified information about elections. The risk is particularly acute as Google Gemini has been trained on audio, video, images and code in addition to text. One of the central issues identified by the researchers is that it is relatively trivial to get Google Gemini to leak system prompt information. This is information about the “prime directives” of the AI model, so to speak, that should not be visible to service users. The researchers’ first prompt injection attack is to simply change the wording when asking the AI about this information, causing it to spit out its core rules when asked about its “foundational instructions” instead. Another exploit involving the system prompt is a seeming state of confusion that the AI can be thrown into by peppering it with many uncommon tokens.


RegTech solutions can be a game changer in fintech regulatory scrutiny

RegTech solutions allow businesses to create transparency and accountability within compliance procedures and ensure the timely conclusion of statutory obligations. Employers can stay on top of important changes and address them promptly with the help of compliance management software. Digital, authentic, and tamper-proof copies of all required compliance papers are stored conveniently. While onboarding any RegTech solution, the legal teams of the RegTech players conduct comprehensive compliance applicability assessments to identify the list of applicable acts and compliances. This helps in creating a list relevant to each financial institution. RBI directives continuously evolve the regulatory landscape to keep up with innovations in technology and services. ... The RegTech space has been investing heavily in creating automation layers for compliance document generation and integration with transaction systems to eliminate manual touch points. Additionally, they are preparing themselves for API-based filings as soon as the regulators are ready to adopt a GST-like model and create an ecosystem for RegTech players.


Managing Technical Debt in Agile Environments

Technical debt usually occurs when teams rush to push new features within deadlines, writing code without regard for considerations such as security and extensibility. Over time, the tech debt grows and becomes difficult to manage. ... Code Debt: When we talk about tech debt, code debt is the first thing that comes to mind. It stems from bad coding practices, failure to follow proper coding standards, insufficient code documentation, and the like. This type of debt causes problems with maintainability, extensibility, and security. Testing Debt: This occurs when the testing strategy is inadequate -- for example, an absence of unit tests, integration tests, or adequate test coverage. This kind of debt erodes confidence in pushing new code changes and increases the risk of defects and bugs surfacing in production, potentially leading to system failures and customer dissatisfaction. Documentation Debt: This manifests when documentation is either insufficient or outdated. It poses challenges for both new and existing team members in comprehending the system and the rationale behind certain decisions, thereby impeding efficiency in maintenance and development efforts.
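Paying down testing debt can be as simple as retrofitting tests onto code that shipped without them. The function and its behavior here are purely illustrative:

```python
# A small function that shipped without tests, plus the unit tests added
# afterward to pay down the testing debt. Names and rules are illustrative.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; percent is clamped to [0, 100]."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

# Tests added after the fact: the happy path plus the edge cases that
# untested code tends to get wrong.
assert apply_discount(200.0, 25) == 150.0
assert apply_discount(100.0, 150) == 0.0    # over-100% discount is clamped
assert apply_discount(100.0, -10) == 100.0  # negative discount is clamped
print("all tests pass")
```

In a real codebase these assertions would live in a test suite (e.g. pytest) and run in CI, so the confidence to push new changes is restored incrementally.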


Science Simplified: What Is Quantum Mechanics?

In a more general sense, the word “quantum” can refer to the smallest possible amount of something. The field of quantum mechanics deals with the most fundamental bits of matter, energy and light and the ways they interact with each other to make up the world. Unlike the way in which we usually think about the world, where we imagine things to have particle- or wave-like properties separately (baseballs and ocean waves, for example), such notions don’t work in quantum mechanics. Depending on the situation, scientists may observe the same quantum object as being particle-like or wave-like. For example, light cannot be thought of as only a photon (a light particle) or only a light wave, because we might observe both sorts of behaviors in different experiments. Day to day, we see things in one “state” at a time: here or there, moving or still, right-side up or upside down. The state of an object in quantum mechanics isn’t always so straightforward. For example, before we look to determine the locations of a set of quantum objects, they can exist in what’s called a superposition — or a special type of combination — of two or more locations.
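For a quantum object that can be found at one of two locations, the superposition described above is commonly written as:

```latex
|\psi\rangle = \alpha\,|x_1\rangle + \beta\,|x_2\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of finding the object at location $x_1$ or $x_2$ when its position is measured; until that measurement, the object is not "at" either location alone.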



Quote for the day:

"Success comes from knowing that you did your best to become the best that you are capable of becoming." -- John Wooden

Daily Tech Digest - March 23, 2024

The tech tightrope: safeguarding privacy in an AI-powered world

The only means of truly securing our privacy is the proactive enforcement of the most secure and novel technological measures at our disposal: those that place a strong emphasis on privacy and data encryption while still giving breakthrough technologies, such as generative AI models and cloud computing tools, full access to the large pools of data they need to meet their full potential. Protecting data when it is at rest (i.e., in storage) or in transit (i.e., moving through or across networks) is ubiquitous. The data is encrypted, which is generally enough to ensure that it remains safe from unwanted access. The overwhelming challenge is how to also secure data while it is in use. ... One major issue with Confidential Computing is that it cannot scale sufficiently to cover the magnitude of use cases necessary to handle every possible AI model and cloud instance. Because a TEE must be created and defined for each specific use case, the time, effort, and cost involved in protecting data is restrictive. The bigger issue with Confidential Computing, though, is that it is not foolproof. The data in the TEE must still be unencrypted for it to be processed, opening the potential for quantum attack vectors to exploit vulnerabilities in the environment.


Ethical Considerations in AI Development

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to guarantee better conditions for the development and use of this innovative technology. Parliament’s priority is to ensure that AI systems used in the EU are secure, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems must be overseen by people, rather than automation, to avoid harmful outcomes. The European Parliament also wants to establish a uniform and technologically neutral definition of AI that can be applied to future AI systems. “It is a pioneering law in the world,” highlighted von der Leyen, who celebrated that AI can thus be developed within a legal framework that can be “trusted.” The institutions of the European Union have agreed on the artificial intelligence law that allows or prohibits the use of technology depending on the risk it poses to people and that seeks to boost the European industry against giants such as China and the United States. The pact was reached after intense negotiations in which one of the sensitive points has been the use that law enforcement agencies will be able to make of biometric identification cameras to guarantee national security and prevent crimes such as terrorism or the protection of infrastructure.


FBI and CISA warn government systems against increased DDoS attacks

The advisory has grouped typical DoS and DDoS attacks based on three technique types: volume-based, protocol-based, and application layer-based. While volume-based attacks aim to cause request fatigue for the targeted systems, rendering them unable to handle legitimate requests, protocol-based attacks identify and target the weaker protocol implementations of a system causing it to malfunction. A novel loop DoS attack reported this week targeting network systems, using weak user datagram protocol (UDP)-based communications to transmit data packets, is an example of a protocol-based DoS attack. This new technique is among the rarest instances of a DoS attack, which can potentially result in a huge volume of malicious traffic. Application layer-based attacks refer to attacks that exploit vulnerabilities within specific applications or services running on the target system. Upon exploiting the weaknesses in the application, the attackers find ways to over-consume the processing powers of the target system, causing them to malfunction. Interestingly, the loop DoS attack can also be placed within the application layer DoS category, as it primarily attacks the communication flaw in the application layer resulting from its dependency on the UDP transport protocol.
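The volume-based category above works by exhausting a target's capacity to serve legitimate requests. One common (and only partial) mitigation is per-client rate limiting; the token-bucket sketch below is an illustration of that idea, not a complete DDoS defense, and the rate and capacity values are arbitrary.

```python
# Token-bucket rate limiter: each client gets a bucket that refills at a
# fixed rate; a request is allowed only if a token is available. A burst
# beyond the bucket's capacity is rejected, which blunts request floods.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10.0)
results = [bucket.allow() for _ in range(20)]  # a rapid burst of 20 requests
print(f"{sum(results)} of 20 burst requests allowed")
```

Real deployments push this kind of limiting to the network edge (load balancers, scrubbing services) and key buckets by client IP or token, since a single-process limiter cannot absorb a distributed flood.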


The Future of AI: Hybrid Edge Deployments Are Indispensable

Deploying AI models locally eliminates dependence on external network connections or remote servers, minimizing the risk of downtime caused by maintenance, outages or connectivity issues. This level of resilience is particularly critical in sectors like healthcare and other sensitive industries where uninterrupted service is absolutely critical. Edge deployments also ensure “low latency,” as the speed of light is a fundamental limiting factor, and there may be significant latency when accessing cloud infrastructure. With increasingly powerful hardware available at the edge, it enables the processing of data that is physically nearby. Another benefit is the ability to harness specialized hardware that is tailored to their needs, optimizing performance and efficiency while bypassing network latency and bandwidth limitations, as well as configuration constraints imposed by cloud providers. Lastly, edge deployments allow for the centralization of large shared assets within a secure environment, which in turn simplifies storage management and access control, enhancing data security and governance.


OpenTelemetry promises run-time "profiling" as it guns for graduation

This means engineers will be able “to correlate resource exhaustion or poor user experience across their services with not just the specific service or pod being impacted, but the function or line of code most responsible for it.” In other words, they won’t just know when something falls down, but why; something commercial offerings can provide but the project has lacked. OpenTelemetry governance committee member Daniel Gomez Blanco, principal software engineer at Skyscanner, added that the advances in profiling raise new challenges, such as how to represent user sessions, how they tie into resource attributes, and how to propagate context from the client side to the back end and back again. As a result, the project has formed a new special interest group to tackle these challenges. Honeycomb.io director of open source Austin Parker said: “We're right along the glide path in order to continue to grow as a mature project.” As for the graduation process, he said, the security audits will continue over the summer along with work on best practices, audits and remediation. They should complete in the fall: “We'll publish results along these lines, and fixes, and then we're gonna have a really cool party in Salt Lake City probably.”


Fake data breaches: Countering the damage

Fake data breaches can hurt an organization’s security reputation, even if it quickly debunks the fake breach. Whether real or fake, news of a potential breach can create panic among employees, customers, and other stakeholders. For publicly traded companies, the consequences can be even more damaging as such rumors can degrade a company’s stock value. Fake breaches also have direct financial consequences. Investigating a fake breach consumes time, money, and security personnel. Time spent on such investigations can mean time away from mitigating real and critical security threats, especially for SMBs with limited resources. Some cybercriminals might deliberately create panic and confusion about a fake breach to distract security teams from a different, real attack they might be trying to launch. Fake data breaches can help them gauge the response time and protocols an organization may have in place. These insights can be valuable for future, more severe attacks. In this sense, a fake data breach may well be a “dry run” and an indicator of an upcoming cyber-attack.


CISOs: Make Sure Your Team Members Fit Your Company Culture

Cybersecurity is not a solitary endeavor; it's a collective fight against common adversaries. CISOs can enhance their teams' capabilities by fostering collaboration both within the organization and with external communities. Internally, promoting a security-aware culture across all departments can empower employees to be the first line of defense. Externally, participating in industry forums, sharing threat intelligence with peers and engaging in public-private partnerships can provide access to shared resources, insights and best practices. These collaborations can extend a team's reach and effectiveness beyond its immediate members. Diversifying recruitment efforts can help uncover untapped talent pools. Initiatives aimed at increasing the participation of underrepresented groups in cybersecurity, such as women and veterans, can broaden the range of candidates. CISOs should also look beyond traditional recruitment channels and explore alternative sources such as hackathons, cybersecurity competitions and online communities.


Architecting for High Availability in the Cloud with Cellular Architecture

Cellular architecture is a design pattern that helps achieve high availability in multi-tenant applications. The goal is to design your application so that you can deploy all of its components into an isolated "cell" that is fully self-sufficient. Then, you create many discrete deployments of these "cells" with no dependencies between them. Each cell is a fully operational, autonomous instance of your application ready to serve traffic with no dependencies on or interactions with any other cells. Traffic from your users can be distributed across these cells, and if an outage occurs in one cell, it will only impact the users in that cell while the other cells remain fully operational. ... one of the goals of cellular architecture is to minimize the blast radius of outages, and one of the most likely times that an outage may occur is immediately after a deployment. So, in practice, we’ll want to add a few protections to our deployment process so that if we detect an issue, we can stop deploying the changes until we’ve resolved it. To that end, adding a "staging" cell that we can deploy to first and a "bake" period between deployments to subsequent cells is a good idea.
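The tenant-to-cell routing and staged rollout described above can be sketched in a few lines. This is a minimal illustration, not a production router: the cell names, the hash-based assignment, and the `rollout_order` helper are all hypothetical choices for the sake of the example.

```python
import hashlib

# Hypothetical cell names; a real deployment would use service discovery.
CELLS = ["cell-a", "cell-b", "cell-c", "cell-d"]

def cell_for_tenant(tenant_id: str) -> str:
    """Deterministically pin a tenant to one cell, so an outage in any
    single cell only affects the tenants routed to that cell."""
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]

def rollout_order(cells):
    """Deploy to a staging cell first, then to production cells one at a
    time; the 'bake' wait between waves is elided here."""
    return ["cell-staging"] + list(cells)

# The mapping is stable: the same tenant always lands in the same cell.
assert cell_for_tenant("tenant-42") == cell_for_tenant("tenant-42")
```

Stable hashing matters here: if the assignment changed between requests, a single tenant's traffic would fan out across cells and a one-cell outage would no longer have a bounded blast radius.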


Swift promotes the concept of a universal shared ledger. But based on messaging

While many of Swift’s points are perfectly valid, in our view, this demonstrates the classic conundrum of how incumbents respond to innovation. Swift could make sense as the operator of some of these shared ledgers. Likewise, incumbent central depositories (CSDs) might be the logical operators for securities ledgers. ... “By leveraging existing components of the financial system that already work well together – including secure financial messaging such as that provided by Swift – the industry can avoid undue levels of market concentration risk, and draw upon tried-and-tested practices to deliver the rich, structured data that it has been working towards for decades.” It continues, “Rather than having each institution record its own individual ‘state’, that function could be abstracted and performed at an industry level, similar to how messaging evolved. Such a state machine could be built on more decentralised blockchain technology, or equally a more centralised platform like Swift’s Transaction Manager could be enhanced for this use.”


The AI Advantage: Mitigating the Security Alert Deluge in a Talent-Scarce Landscape

Security teams are still struggling with an overflow of alerts. The report found that an average of 9,854 false positives arise weekly, wasting valuable time and resources as analysts investigate these non-issues. Moreover, undetected threats present an even more significant concern. The average organization fails to identify a staggering 12,009 threats each week, leaving vulnerabilities exposed. Imagine this: you’re a cybersecurity analyst tasked with safeguarding your organization’s attack surface. But instead of strategically deploying defenses, you’re buried under an avalanche of security alerts. Thousands of alerts bombard your console daily, a relentless barrage threatening to consume your entire workday. This overwhelming volume is the reality for many security analysts. While security tools play a crucial role in detection, they often generate many false positives – harmless activities mistaken for threats. These false alarms are like smoke detectors going off whenever you toast a bagel, forcing you to waste time investigating non-issues. The consequences are dire, as exhausted analysts are more likely to miss genuine threats amidst the noise.


AWS CISO: Pay Attention to How AI Uses Your Data

AI users always need to think about whether they're getting quality responses. The reason for security is for people to trust their computer systems. If you're putting together this complex system that uses a generative AI model to deliver something to the customer, you need the customer to trust that the AI is giving them the right information to act on and that it's protecting their information. ... With strong foundations already in place, AWS was well prepared to step up to the challenge as we've been working with AI for years. We have a large number of internal AI solutions and a number of services we offer directly to our customers, and security has been a major consideration in how we develop these solutions. It's what our customers ask about, and it's what they expect. As one of the largest-scale cloud providers, we have broad visibility into evolving security needs across the globe. The threat intelligence we capture is aggregated and used to develop actionable insights that are used within customer tools and services such as GuardDuty. In addition, our threat intelligence is used to generate automated security actions on behalf of customers to keep their data secure.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - March 22, 2024

Why adversarial AI is the cyber threat no one sees coming

Don’t settle for doing red teaming on a sporadic schedule, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps supporting MLOps from now on. The goal is to preemptively identify system and any pipeline weaknesses and work to prioritize and harden any attack vectors that surface as part of MLOps’ System Development Lifecycle (SDLC) workflows. ... Have a member of the DevSecOps team staying current on the many defensive frameworks available today. Knowing which one best fits an organization’s goals can help secure MLOps, saving time and securing the broader SDLC and CI/CD pipeline in the process. Examples include the NIST AI Risk Management Framework and OWASP AI Security and Privacy Guide​​. ... Consider using a combination of biometrics modalities, including facial recognition, fingerprint scanning, and voice recognition, combined with passwordless access technologies to secure systems used across MLOps. Gen AI has proven capable of helping produce synthetic data. 


300K Internet Hosts at Risk for 'Devastating' Loop DoS Attack

The attack exploits a novel traffic-loop vulnerability present in certain user datagram protocol (UDP)-based applications, according to a post by the Carnegie Mellon University's CERT Coordination Center. An unauthenticated attacker can use maliciously crafted packets against a UDP-based vulnerable implementation of various application protocols such as DNS, NTP, and TFTP, leading to DoS and/or abuse of resources. ... The researchers put the attack on par with amplification attacks in the volumes of traffic they can cause, with two major differences. One is that attackers do not have to continuously send attack traffic due to the loop behavior, unless defenses terminate loops to shut down the self-repetitive nature of the attack. The other is that without a proper defense, the DoS attack will likely continue for a while. Indeed, DoS attacks are almost always about resource consumption in Web architecture, but until now it's been extremely tricky to use this type of attack to take a Web property completely offline because "you have to have systems smart enough to gather an army of hosts that will call upon the victim web architecture all at once," explains Jason Kent.
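The loop behavior the researchers describe can be simulated without any network code: two services that each reply to an unparseable packet with an error message will bounce errors back and forth indefinitely once a single spoofed packet pairs them up. The handlers below are toy stand-ins, not real protocol implementations.

```python
def make_service(name):
    """A toy UDP-style handler that replies to any unparseable packet
    with an error message -- the behavior that enables a traffic loop."""
    def handle(packet):
        if packet.startswith(b"OK"):
            return None  # valid request, no reply needed
        return b"ERROR: bad packet, says " + name.encode()
    return handle

server_a = make_service("a")
server_b = make_service("b")

# One spoofed packet (its source forged as the other server) starts it:
packet = b"junk"
hops = 0
while packet is not None and hops < 10:
    packet = server_a(packet) if hops % 2 == 0 else server_b(packet)
    hops += 1

# Neither side ever stops replying; only the artificial cap ends the loop.
assert hops == 10 and packet is not None
```

This also shows why the attacker need not keep sending traffic: the victims sustain the exchange themselves until a defense terminates the loop.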


Security Flaw Can Open Over 3 Million Door Locks, Mainly at Hotels

The researchers have not released technical details to prevent hackers from exploiting the threat. Nevertheless, the vulnerability is relatively easy for a bad actor to abuse. “An attacker only needs to read one keycard from the property to perform the attack against any door in the property. This keycard can be from their own room, or even an expired keycard taken from the express checkout collection box,” they wrote. ... The vulnerability affects all locks under the Saflok brand, including the Saflok MT, the Quantum Series, the RT Series, the Saffire Series and the Confidant Series, among others. Unfortunately, it’s impossible for a hotel guest to visually tell if a lock has been patched, the researchers say. Whether anyone else knows about the flaw remains unclear. ... In a statement, Dormakaba confirmed that the flaw exists. "As soon as we were made aware of the vulnerability by a group of external security researchers, we initiated a comprehensive investigation, prioritized developing and rolling out a mitigation solution, and worked to communicate with customers systematically," the company said.


Quantum talk with magnetic disks

While much attention has been directed towards the computation of quantum information, the transduction of information within quantum networks is equally crucial in materializing the potential of this new technology. Addressing this need, a research team at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) is now introducing a new approach for transducing quantum information. The team has manipulated quantum bits, so-called qubits, by harnessing the magnetic field of magnons—wave-like excitations in a magnetic material—that occur within microscopic magnetic disks. The researchers have presented their results in the journal Science Advances. The construction of a programmable, universal quantum computer stands as one of the most challenging engineering and scientific endeavors of our time. The realization of such a computer holds great potential for diverse industry fields such as logistics, finance, and pharmaceutics. However, the construction of a practical quantum computer has been hindered by the intrinsic fragility of how the information is stored and processed in this technology. 


Relational Data at the Edge: How Cloudflare Operates Distributed PostgreSQL Clusters

The team opted for stolon, an open-source cluster management system written in Go, running as a thin layer on top of the PostgreSQL cluster, with a PostgreSQL-native interface and support for multiple-site redundancy. It is possible to deploy a single stolon cluster distributed across multiple PostgreSQL clusters, whether within one region or spanning multiple regions. Stolon's features include stable failover with minimal false positives, with the Keeper Nodes acting as the parent processes managing PostgreSQL changes. Sentinel Nodes function as orchestrators, monitoring Postgres components' health and making decisions, such as initiating elections for new primaries. ... Cloudflare chose PgBouncer as the connection pooler due to its compatibility with PostgreSQL protocol: the clients can connect to PgBouncer and submit queries as usual, simplifying the handling of database switches and failovers. PgBouncer, a lightweight single-process server, manages connections asynchronously, allowing it to handle more concurrent connections than PostgreSQL.
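A connection-pooler setup of the kind described might look like the fragment below. This is an illustrative `pgbouncer.ini`, not Cloudflare's configuration; the hostnames and sizing values are placeholders.

```ini
[databases]
; Clients connect to PgBouncer; it forwards to the current primary,
; which simplifies failovers from the client's point of view.
appdb = host=primary.pg.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; Transaction pooling lets many client connections share few server ones.
pool_mode = transaction
max_client_conn = 5000
default_pool_size = 40
```

Because PgBouncer speaks the PostgreSQL wire protocol, clients need no changes: they point at port 6432 instead of 5432 and query as usual.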


Downtime Cost of Cyberattacks and How to Reduce It

“IT leaders and other business decision makers must think critically about their support teams, identifying and encouraging continual upskilling via real-world scenarios to mirror the threats they’re likely to experience,” Hynes advises. "Staying skilled in parallel to increasingly complex and intelligent cyberattacks can make recovery more efficient and alleviate unnecessary downtime that puts the company reputation and stakeholder relationships at risk.” This will often necessitate de-siloing an organization. As one paper observes, cybersecurity analysts are sometimes not looped into continuity plans, making those plans next to worthless when something actually happens. Conversely, analysts do not necessarily share the findings of their risk assessments with the necessary departments. So, nobody can plan accordingly. As previously referenced, planning for alternate means of communication, whether it be in a hospital or in another business, is crucial. Ensuring that an immediate fallback to typical communication channels is in place will almost assuredly save time in the event of an attack.


Are cobots collaborative enough to protect your cyberspace?

Critics argue that the rise of collaborative robots in modern manufacturing brings about cybersecurity concerns. These sophisticated machines, integrated with advanced sensors and AI, work alongside humans in shared spaces, posing risks that must be addressed. Unauthorised access to sensitive data in cobots can lead to intellectual property theft and operational disruptions, while cyber attackers manipulating cobot programming can cause product and equipment damage and physical harm to workers. “Moreover, disabling safety mechanisms through cyber attacks exposes workers to injury risks. To safeguard human labour in collaborative environments, comprehensive cybersecurity strategies are imperative. This entails regular software updates, encryption methods, and continuous monitoring for swift responses to potential breaches,” Vineet Kumar, global president and founder, CyberPeace, a non-profit organisation of cybersecurity explained. Experts believe that the firmware, controlling lower-level operations such as sensors and actuators, is often updated over the air, leaving robots vulnerable to penetration through the network or its peripherals.


Legal Issues for Data Professionals: AI Creates Hidden Data and IP Legal Problems

From a legal perspective, the core risks are a) that analytics run on the database will disclose customer information in violation of the confidentiality agreement and b) that the use of the customer information could be outside the scope of permitted use. It is common for confidentiality and nondisclosure obligations to be integrated into the governing agreement and not in a standalone nondisclosure agreement (NDA). Further, in many industries, the terms of customer confidentiality will be tailored specifically to the transaction and the agreement. As a result, it requires both business and legal analysis to determine the permitted, the prohibited, and the gray areas for scope of use. It is important to note that the company and the customer entered into an NDA at the beginning of the transaction or before the transaction. The NDA may have different terms than the final agreement, but both the NDA and the agreement, with their different terms, will be in the database. In addition, a company’s use of confidential information outside its permitted scope of use constitutes a breach of contract and could result in liability and the award of monetary damages against the company. 


CDOs, data science heads to fill Chief AI Officer positions in India

The refactoring of C-level technology roles across Indian enterprises, according to CK Birla Hospitals’ CIO Mitali Biswas, can be chalked up to the dearth of talent or skills presently available to take on the responsibilities for the role or create an efficient team under that position. “While larger enterprises may still want to create a new position and a team around it, small and medium businesses will look up to their existing technology leaders, such as the CIO or the CTO or the CDO to take up the CAIO mantle,” Biswas explained, adding that maturity and pervasiveness of the CAIO role, at least in the Indian healthcare sector, is two to three years away. Santanu Ganguly, who is the CEO of advisory firm StrategINK, said he believes that other industry sectors, including healthcare, will see the role of CAIO being adopted in the next one to three years, driven by the boards and CEOs’ agenda of shaping the future of customer-centricity, offering innovation, enhanced productivity & efficient operations. Along the same lines, Gaurav Kataria, vice president of digital manufacturing and CDIO at PSPD, ITC Limited said that the evolution of the CAIO role is already happening in India.


The Changing Face of Consumption and End User Experience

IT architecture has evolved through several distinct epochs that supported the evolution of business technology. This business technology shift is more than a “consumption gap … the idea that technology companies can add features and complexity to their products much faster than consumers can consume them.” Indeed, we have seen a trend that argues that increased technology results in improved flexibility and intuitive product use. As Steve Jobs said, “Simple can be harder than complex: You must work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” As business technology evolves, it delivers simplicity with more flexible consumption models that support building a more intuitive and contextual end-user experience. The architectural skills that support this evolution are also changing. This article discusses how those IT architecture skills are evolving. It suggests how the architecture toolkit for the future is also evolving so that we can continue to evolve technologies that “can make life easier … [we] touch the right people. [with] things [that] can profoundly influence life.”



Quote for the day:

"Holding on to the unchangeable past is a waste of energy, and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - March 21, 2024

India’s digital healthcare program promises to democratise healthcare for all

At an advanced level, AI-powered clinical decision support systems riding on the back of EHR systems could help physicians manage a much higher patient workload without compromising on the quality or accuracy of their diagnostics. This could be transformational for India’s universal healthcare goal given the sheer scarcity of qualified doctors in our country. Similarly, remote monitoring solutions integrated with EHR will have the capability to scrutinize health data generated by patients through wearables. This can facilitate the prompt identification of potential health issues, alerting healthcare providers and enabling timely interventions. Going further, the application of predictive analytics on anonymised aggregate-level data stands poised to significantly contribute to the identification and mitigation of large at-risk populations through proactive preventive measures. This sophisticated application paves the way for a healthcare strategy that is comprehensive and yet targeted at the most vulnerable communities. The combination of AI and EHR will open whole new possibilities. 


API-first observability for the API era

In order for the developer not to have to keep logs, dashboards, and alerts up-to-date with every new change to the app, there needs to be a basic level of production visibility that is both automatic and covers the entire system. ... If today’s logs and error alerts are too overwhelming, we’re going to need to move up the stack. The best candidate: APIs. Not only are APIs a functional boundary for the software, but they often demarcate an organizational boundary, the hand-off point between one team and another. Monitoring at the API level means identifying the issues that are most likely to impact others and matching the monitoring to the boundaries of responsibility. Getting comprehensive metrics across API endpoints allows teams to get ahead of alerts and quickly determine who may be responsible for investigations and fixes. ... Modern software exists as a collage across tech stacks and partial migrations, where only some of the software is paged in by the current members of the organization. In order for a monitoring approach to be comprehensive, it needs to easily work across programming languages and frameworks, without requiring changes to the underlying system.
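Comprehensive per-endpoint metrics of the kind described can be captured at the API boundary with a thin wrapper, independent of what the handler does internally. The sketch below uses an in-memory dict for clarity; a real system would export to a metrics backend, and the endpoint names and handler are hypothetical.

```python
import time
from collections import defaultdict

# In-memory per-endpoint metrics, keyed by endpoint name.
metrics = defaultdict(lambda: {"calls": 0, "errors": 0, "total_ms": 0.0})

def monitored(endpoint):
    """Wrap a handler so every call records latency and error counts
    at the API boundary, regardless of the handler's framework."""
    def wrap(handler):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            except Exception:
                metrics[endpoint]["errors"] += 1
                raise
            finally:
                m = metrics[endpoint]
                m["calls"] += 1
                m["total_ms"] += (time.perf_counter() - start) * 1000
        return inner
    return wrap

@monitored("GET /users")
def get_users():
    return ["alice", "bob"]

get_users()
assert metrics["GET /users"]["calls"] == 1
```

Because the instrumentation lives at the endpoint, the resulting metrics line up with team ownership boundaries: an error-rate spike on `GET /users` points directly at the team responsible for that API.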


Chasing the Tech in Architect

While much of our focus for the BTABoK originally was making a suite of techniques for architecture teams at the practice and strategy level, it is also extremely important that we provide reusable tools at the execution level, and that these tools be taken into account and traceable to the more outcome-, strategy- and stakeholder-oriented side of the BTABoK. Only then will the knowledge properly connect all practitioners of architecture into a unified and effective practice. The BTABoK describes architecture and engineering/delivery as separate professions and trades. Architecture and engineering have overlapping responsibilities and competencies in certain areas, but they remain separate. This policy best supports the individual value of both, and evidence (surveys and interviews) suggests that while other configurations are possible, program work is more successful if both professions are present, meaning a lead engineer and a lead architect create better outcomes than when there is only one. However, much of this is context-sensitive, and much more research is necessary in this space before we get conclusive recommendations.


Sustainable Data Management: An Overview

Optimizing data storage and cloud computing sustainability is crucial in shaping a greener digital future. To achieve this, several key steps can be taken. Firstly, organizations should prioritize the implementation of energy-efficient data centers. These centers can significantly reduce power consumption by utilizing advanced cooling techniques, virtualization technologies, and renewable energy sources. Secondly, adopting a comprehensive data management strategy is essential. This involves consolidating and organizing data to minimize redundancy and improve storage efficiency. ... The circular economy focuses on reducing waste and maximizing resource efficiency. In data management, this entails implementing strategies such as refurbishing and reusing outdated equipment, promoting recycling programs for electronic devices, and responsibly disposing of hazardous materials. By implementing these measures, organizations can not only minimize their ecological footprint but also reduce costs associated with purchasing new hardware and comply with regulations related to e-waste management.


New Gmail Security Rules—You Have 14 Days To Comply, Google Says

Google has been making it explicit since October 2023 that new email sender authentication rules will result in some messages to Gmail accounts being rejected and bounced back to the sender en masse. Neil Kumaran, a Google group product manager responsible for Gmail security and trust, announced that “starting in 2024, we’ll require bulk senders to authenticate their emails, allow for easy unsubscription and stay under a reported spam threshold.” Some of these new protections are scheduled to start in 14 days and will impact every holder of a personal Gmail account in a very positive way. ... Although Google does appear to be taking a slow and steady approach to the new rules for bulk email senders to Gmail accounts, you can expect things to start ramping up from April 1. “Starting in April 2024, we’ll begin rejecting non-compliant traffic,” Google has stated in an email sender guidelines FAQ, continuing, “we strongly recommend senders use the temporary failure enforcement period to make any changes required to become compliant.”
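The authentication Google requires is implemented as DNS TXT records on the sending domain. The records below are placeholders for an imaginary `example.com` (the DKIM public key is truncated and the ESP include is hypothetical); they illustrate the shape of SPF, DKIM, and DMARC records, not any particular sender's setup.

```text
; Illustrative DNS TXT records for example.com -- all values are placeholders
example.com.                      TXT  "v=spf1 include:_spf.example-esp.com ~all"
selector._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg..."
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF lists which servers may send for the domain, DKIM lets receivers verify a cryptographic signature on each message, and DMARC tells receivers what to do when both checks fail; bulk senders must also support one-click unsubscribe and keep reported spam rates low.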


Supercomputing’s Future Is Green and Interconnected

Well, we are building a new machine with 96 GPUs; these will be the SXM5s, water-cooled NV-linked devices. We will know soon if they will have better performance. As I mentioned, they may be faster, but they may not be more efficient. But, one thing we found with our A100s was that most of the performance is available in the first half of the wattage, so you get 90 percent of the performance in the first 225 Watts. So, one of the things that we’re going to try with the water-cooled system is to run it in power capped mode, and see what kind of performance we get. One nice thing about the water-cooled version is that it doesn’t need fans, because the fans count against your wattage. When these units are running, it’s about four kilowatts of power per three units of space (3U). So it’s like forty 100 watt light bulbs in a small box. Cooling that down requires blowing a tremendous amount of air across it, so you can have a few hundred watts of fans. And with water cooling, you just have a central pump, which means significant savings. The heat capacity of water is about 4000 times the heat capacity of air by volume, so you have to use a lot less of it. 
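The quoted "about 4000 times" figure is easy to sanity-check: volumetric heat capacity is specific heat times density, and the textbook values below (approximate, at room temperature) give a ratio in the mid-3000s, consistent with the interviewee's rounding.

```python
# Volumetric heat capacity = specific heat (J/(kg*K)) * density (kg/m^3)
water = 4186 * 1000      # ~4.19 MJ/(m^3*K)
air = 1005 * 1.204       # ~1.2 kJ/(m^3*K) at room temperature

ratio = water / air
print(round(ratio))      # roughly 3500, i.e. "about 4000" as quoted
assert 3000 < ratio < 4000
```

The practical upshot is the one in the interview: moving the same heat requires vastly less water than air, so a central pump replaces hundreds of watts of fans.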

5 Ways CISOs Can Navigate Their New Business Role

CISOs can’t afford to not pay attention to their data breach liability: A breakdown from the firm of the top 35 breaches across the world in 2023 found that organizations paid almost $2.6 billion in fines for exposing 1.5 billion records, with almost half of the breaches happening at public agencies and healthcare-related industries. Among this list were breaches at many of the world's largest telecommunications providers. Out of the top 35 breaches, all but one happened in the European Union and US. ... Further, transparency should be a natural part of a CISO's playbook, not just something that is activated in post-breach situations. Part of the motivation is compliance, as Forrester analysts noted. "Regulators are pushing for greater transparency," they wrote. ... In general, CISOs need to "own it, recognize where things went wrong, and proactively work to fix them, including as many stakeholders as possible to ensure you fix the root cause and identify any other issues that may have been missed," Shier says. "This is especially true now that CISOs are increasingly being held personally accountable for issues that may arise from corporate negligence or security issues that were persistent, known, and not mitigated."

Invest in human capital to create a dynamic, resilient workforce fit for the future

Competition, changing demographics, and evolving skill requirements are some of the challenges in the insurance industry. At Ageas Federal, our 4G Employee Value Proposition (EVP) provides an opportunity for employees to be part of the transformation journey in the company and helps address the aforementioned retention challenges. Employees gain unmatched rewards through competitive remuneration, gratuity payouts, attractive incentives, and pre-defined increments. They also receive guaranteed unique benefits such as healthcare, overall wellbeing of themselves and their families, wellness programs, life cover, and various types of leaves and allowances. Employees add glory to their careers through recognition programs like star of the month, galaxy awards, and leadership awards, fostering a culture of excellence. Lastly, employees have opportunities to learn and grow through managerial development programs, structured career progression, and self-paced learning programs, ensuring continuous development and skill enhancement.


Importance of data privacy and security measures for secure digital learning

With the ever-increasing use of online platforms in education, encryption and secure communication protocols are essential for safeguarding confidential information shared in virtual classrooms, discussions and collaborative projects. The digital environment is ever-changing and cyber threats are constantly evolving. It is essential for educational institutions to remain vigilant, update security measures regularly and remain informed of new threats. Adapting to the ever-changing threat landscape is necessary to identify and address potential vulnerabilities in a timely manner. Therefore, at the end of the day, data protection and security in digital learning isn't just a technical thing, it's an ethical and strategic necessity. Schools that focus on these issues not only protect sensitive information, but they also create a culture of trust, responsibility and academic success. As technology changes the way we learn, a strong focus on data protection and security will be the foundation of a strong and safe digital learning environment. As we move into the digital era and incorporate technology into our educational systems, it is essential to recognise and prioritize the safeguarding of student data.


Email Bomb Attacks: Filling Up Inboxes and Servers Near You

The measures include implementing reCAPTCHA technology to determine if a human - or bot - is attempting to use a platform. "Email bombing bots are generally unable to bypass a reCAPTCHA, which would prevent them from signing up" for a registration or other service that might help facilitate a massive email bomb attack. Users should be trained to avoid using work email addresses to subscribe to nonwork-related services and limit their online exposure to direct email addresses by using contact forms that do not expose email addresses. "Given the potential implications of such an attack on the HPH sector, especially concerning unresponsive email addresses, downgraded network performance and potential downtime for servers, this type of attack remains relevant to all users," HHS HC3 said. "Email bomb attacks are potentially disruptive and can impact the HPH through denial of services where email is a critical part of the business or clinical workflow," said Dave Bailey, vice president of consulting services at security and privacy consultancy Clearwater.



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - March 20, 2024

How to deploy software to Linux-based IoT devices at scale

IoT development may be so nascent that it may not yet be part of your mainstream DevOps processes—you may still be in the early stages of experimentation. Once you’re ready to scale, you’ll need to bring IoT into the DevOps fold. Needless to say, the scale and costs of dealing with thousands of deployed devices are significant. DevOps is an important approach for ensuring the seamless and efficient delivery of software development, updates, and enhancements to IoT devices. By integrating IoT development into an established workflow, you’ll gain the improved collaboration, agility, assured delivery, control, and traceability that’s part of a modern DevOps process. It’s critical to use a secure deployment process to protect your IoT devices from unauthorized access, inadvertent vulnerabilities, and malware. A secure deployment must include strong authentication methods to access the devices and the management platform. The data that is transmitted between the devices and the management platform should be protected by encryption. The manner in which the client devices connect to the platform after deployment should always be encrypted as well.
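The verify-before-install step at the heart of a secure update pipeline can be sketched as below. This is a toy integrity check using a shared secret; real fleets use asymmetric signatures (for example, frameworks in the TUF family) so devices never hold a signing key. The key and payload here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical per-device secret provisioned at manufacture.
DEVICE_KEY = b"provisioned-at-manufacture"

def verify_update(payload: bytes, signature: bytes) -> bool:
    """Refuse to install an update unless its signature checks out.
    compare_digest avoids leaking information via timing differences."""
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"firmware-v2.bin contents"
good_sig = hmac.new(DEVICE_KEY, update, hashlib.sha256).digest()

assert verify_update(update, good_sig)            # authentic update installs
assert not verify_update(update + b"x", good_sig)  # tampered update is rejected
```

The design point is that the device, not the pipeline, makes the final accept/reject decision, so a compromised transport or management platform cannot push arbitrary code on its own.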


In 5 Years, Coding will be Done in Natural Language

“Because AI is a tool,” he adds, people should be able to operate at a higher level of abstraction and become far more efficient at the job they do. Eventually, everyone is likely to be coding in natural language, but that wouldn’t necessarily make them a software engineer or a programmer. The skills required to be a coder are far more complex than being able to put prompts in an AI tool, copying the code, or merely typing in natural language. ... Soon, there would be a programming language exclusively in our very own English language. Not to be confused with prompt engineering and writing code, the term natural language programming means that most of the coding would be done by the software in the backend. The programmer would only have to interact with the tool in English, or any other language, and never even look at the code. On the contrary, a few experts believe that English cannot be a programming language because it is filled with misunderstandings. “If they’re going into machines, which will affect the lives of people, we can’t afford that level of comedy,” said Douglas Crockford when talking to AIM. 


Cybersecurity's Future: Facing Post-Quantum Cryptography Peril

The post-quantum cryptography era might not be open season on unprepared systems, he says, but rather an uneven landscape. There are layers of concerns to consider. “What I think scares me a little bit is that this type of attack is somewhat quiet,” Ho says. “The people who are going to be taking advantage of this -- the few people initially who have quantum computers as you can imagine, probably state actors -- will want to keep this on the downlow. You wouldn’t know it, but they probably already have access.” ... “From a technical perspective … being quantum-safe -- it’s a binary thing. You either are or you’re not,” says Duncan Jones, head of quantum cybersecurity with quantum computing company Quantinuum. If there is a particular computer system that an organization fails to migrate to new standards and protocols, he says, that system will be vulnerable. However, the barrier to entry for quantum compute resources may limit early attackers to those with pockets deep enough to procure the technology.
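Jones's point that quantum safety is binary per system suggests a practical first step: inventory the algorithms each system relies on and flag any quantum-vulnerable primitive. The sketch below is illustrative only — the system names are invented, and the classification reflects the broad consensus (Shor's algorithm breaks RSA and elliptic-curve schemes, while NIST's post-quantum selections such as ML-KEM and ML-DSA replace them).

```python
# Public-key schemes broken by Shor's algorithm on a large quantum computer.
QUANTUM_VULNERABLE = {"RSA", "DSA", "ECDSA", "ECDH", "DH"}

# Primitives considered quantum-safe: NIST post-quantum selections, plus
# symmetric ciphers and hashes at sizes that withstand Grover speedups.
QUANTUM_SAFE = {"ML-KEM", "ML-DSA", "SPHINCS+", "AES-256", "SHA-384"}


def audit(systems: dict) -> dict:
    """Return, per system, whether every algorithm it uses is quantum-safe.
    A single unmigrated primitive makes the whole system vulnerable --
    mirroring the 'you either are or you're not' point above."""
    return {
        name: all(alg in QUANTUM_SAFE for alg in algs)
        for name, algs in systems.items()
    }
```

Running such an audit across an estate turns the binary per-system question into a migration worklist, which is the hard part of post-quantum readiness in practice.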


The New CISO: Rethinking the Role

CISOs need to be negotiators. They need to argue in favor of stronger security and convince boards and business units of the risks in terms they understand. How a CISO goes about this can vary, depending on whether board members' experience is in technology or business. Providing a demonstration that puts the technical risk into a business perspective can be helpful. CISOs should also talk with other C-level executives — as well as CISOs from other industries — to get advance buy-in and different perspectives on similar conversations they're having with their boards. ... CISOs need to be comfortable developing a risk-based approach focusing on the importance of resiliency, because attackers will get in. Developing a tested plan to respond to attacks is just as important as implementing preventative measures. … it's balancing the risk with the cost. ... CISOs should build a deeply technical team that can focus on key security practices. They should run tabletop exercises on scenarios such as a system shutdown or inability to connect to the Internet. CISOs must not rely on assumptions about how to respond; running through and testing all response plans is vital.


Architect’s Guide to a Reference Architecture for an AI/ML Data Lake

If you are serious about generative AI, then your custom corpus should define your organization. It should contain documents with knowledge that no one else has, and should only contain true and accurate information. Furthermore, your custom corpus should be built with a vector database. A vector database indexes, stores and provides access to your documents alongside their vector embeddings, which are numerical representations of those documents. ... Another important consideration for your custom corpus is security. Access to documents should honor the access restrictions on the original documents. (It would be unfortunate if an intern could gain access to the CFO’s financial results that have not been released to Wall Street yet.) Within your vector database, you should set up authorization to match the access levels of the original content. This can be done by integrating your vector database with your organization’s identity and access management solution. At their core, vector databases store unstructured data. Therefore, they should use your data lake as their storage solution.
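The access-control point can be made concrete with a small sketch: store each document's permitted groups alongside its embedding and filter search results against the caller's groups. Everything here is illustrative — the embeddings are hand-written stand-ins for a real model's output, the group names are made up, and a production system would delegate both embedding and authorization to your actual model and IAM solution.

```python
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class VectorStore:
    """Toy vector index that carries each document's ACL next to its
    embedding, so search results honor the source document's permissions."""

    def __init__(self):
        self.docs = []  # list of (text, embedding, allowed_groups)

    def add(self, text, embedding, allowed_groups):
        self.docs.append((text, embedding, set(allowed_groups)))

    def search(self, query_emb, user_groups, k=3):
        # Filter by ACL *before* ranking: the caller never sees a score
        # for a document they are not entitled to read.
        visible = [
            (cosine(query_emb, emb), text)
            for text, emb, acl in self.docs
            if acl & set(user_groups)
        ]
        return [text for _, text in sorted(visible, reverse=True)[:k]]
```

Filtering before ranking is the key design choice: it keeps the intern-versus-CFO scenario from the text impossible by construction rather than by a post-hoc check.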


5 ways private organizations can lead public-private cybersecurity partnerships

One tangible step that cybersecurity stakeholders can take is to build the bottom-up infrastructure that can meet the JCDC’s top-down strategic vision as it attempts to descend into tactical usefulness. This can be done by encouraging the development of volunteer civil cyber defense organizations while simultaneously lobbying the federal government to support these entities. This kind of volunteer service model is an incredibly cost-efficient way to boost national defense, save federal government resources, and assure private stakeholders of their independence. ... Unfortunately, as criticism of the JCDC emphasizes, top-down P3 efforts often fail to do so effectively because strategic parameters drive the derivative mission parameters. If industry is to steer CISA’s P3 cyber initiatives more clearly toward alignment with practical tactical considerations, mapping out where innovation and adaptation come from in the interaction of key individuals spread across a complex array of interacting organizations (particularly during a crisis) becomes a critical common capacity.


Decoding tomorrow’s risks: How data analytics is transforming risk management

With digital technologies coming in, corporations can use data analytics to ensure goals correlate with their strategic needs. ... Among risk management strategies, data analytics can contribute to optimisation models, which direct data-backed resource deployment towards risk mitigation; scenario analysis, which recreates likely circumstances to calculate the effectiveness of different risk mitigation approaches; and personalised responses, which supply custom-fit answers to specific market conditions. ... “I believe the role of data analytics in risk mitigation has become paramount, enabling organisations to make decisions based on data-driven insights. By leveraging advanced analytics techniques, such as predictive modelling and ML, we can anticipate threats and take measures to mitigate them. From a business perspective, data analytics is considered indispensable in risk management as it helps organisations identify, assess, and prioritise risks. Companies that leverage data analytics in risk management can gain an edge by minimising disruptions, maximising opportunities, and safeguarding their reputation,” said Yuvraj Shidhaye, founder and director of TreadBinary, a digital application platform.
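Scenario analysis of the kind described above can be sketched as a simple Monte Carlo simulation: draw many possible years of incidents, attach a loss to each, and compare expected loss with and without a mitigation. The incident probability, loss distribution, and mitigation factor below are invented for illustration; a real model would be calibrated to the organisation's own incident data.

```python
import random
import statistics


def simulate_annual_loss(daily_incident_prob, loss_mu, loss_sigma,
                         mitigation_factor=1.0, trials=2000, seed=42):
    """Monte Carlo scenario analysis: simulate `trials` possible years,
    with a daily chance of an incident and a log-normal loss per incident.
    `mitigation_factor` scales losses (e.g. 0.6 = mitigation removes 40%).
    Returns the expected (mean) annual loss across all simulated years."""
    rng = random.Random(seed)  # fixed seed keeps scenarios comparable
    yearly_losses = []
    for _ in range(trials):
        total = 0.0
        for _ in range(365):
            if rng.random() < daily_incident_prob:
                total += rng.lognormvariate(loss_mu, loss_sigma)
        yearly_losses.append(total * mitigation_factor)
    return statistics.mean(yearly_losses)
```

Comparing `simulate_annual_loss(...)` runs that differ only in `mitigation_factor` gives a data-backed estimate of each option's effectiveness, which is exactly the resource-deployment question the optimisation models above are meant to answer.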


How AI-Driven Cyberattacks Will Reshape Cyber Protection

Aside from adaptability and real-time analysis, AI-based cyberattacks also have the potential to cause more disruption within a small window. This stems from the way an incident response team operates and contains attacks. When AI-driven attacks occur, there is the potential to circumvent or hide traffic patterns. This is somewhat similar to criminal activity in which fingerprints are destroyed: the AI methodology is to tamper with the system log analysis process or delete actionable data. Perhaps having advanced security algorithms that identify AI-based cyberattacks is the answer. ... AI has introduced challenges where security algorithms must become predictive, rapid and accurate. This reshapes cyber protection because organizations' infrastructure devices must support the methodologies. The concern is no longer simply that network intrusions, malware and software applications are risk factors, but how AI transforms cyber protection. The shield is not broken; it requires a transformation practice for AI-based attacks.
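One defensive counterpart to the log-deletion tactic described above is tamper-evident logging: chain each entry to a hash of the previous one so that deleting or altering "actionable data" breaks the chain. A minimal sketch follows — the event fields are invented, and a production system would additionally sign entries or anchor the chain with an external service.

```python
import hashlib
import json


def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": digest})


def verify_chain(log):
    """Recompute every link; any deleted, reordered, or edited entry
    breaks the chain and the verification fails."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

The point is not that hashing stops an attacker, but that it makes quiet log manipulation detectable: an incident response team can tell that evidence was removed even when it cannot recover what was removed.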


Four easy ways to train your workforce in cybersecurity

Do your employees install all kinds of random apps and programs? Do the same thing as with the phishing emails: create your own dodgy software that locks the employee's computer, blast it out to the employee database, and see who falls for it. When they have to bring their IT assets in to be unlocked and get a scolding for installing suspicious material, however harmless, the lesson will stick. ... Cyber attacks soar during festive seasons, like the upcoming Holi holiday. Send automated reminders to your employees not to blindly open greeting mails or click on suspicious links. You can track the open and read rates of these messages to get an idea of whether people are actually paying attention. ... If your IT team is savvy and has some time to spare, they can use generative AI to create fake personas – someone from another department, a vendor, or a customer – and see if these fake personas can fool people into giving away information they should be keeping confidential. This is particularly important, because many cyber criminals today are already using generative AI to scam unwitting victims.


Report: AI Outpacing Ability to Protect Against Threats

There are "two sides of the coin" when it comes to AI adoption, said Greg Keller, CTO at JumpCloud. Employee productivity and technology stacks being embedded into SaaS solutions are "the new frontier," he said. "Yet there are universal security concerns. There is fear of commingling or escaping of your data into public sectors. And there is a fear of using one's data on an AI platform. CTOs are concerned about their data leaking through public LLMs," Keller said. ... "We're at the tail end of understanding digital transformation. Now, we are beginning the first phase of the identity transformation. These companies have done an amazing amount of work to lift and shift their technology stacks from legacy into the cloud with one exception - overwhelmingly, it's the Microsoft Active Directory problem," Keller said. "That's still on-premises or self-managed. So they're looking at ways to modernize this. We are in the earliest phases of security shifting away from endpoint-based [security]. Now, it's about understanding access control through the identity, and this is the new frontier."



Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." --Johann Wolfgang von Goethe