Daily Tech Digest - October 04, 2024

Over 80% of phishing sites now target mobile devices

M-ishing was highlighted as the top security challenge plaguing the mobile space, in both the public sector (10%) and the private sector. More importantly, 76% of phishing sites now use HTTPS, giving users a false sense of security. “Phishing using HTTPS is not completely new,” said Krishna Vishnubhotla, vice president of product strategy at Zimperium. “Last year’s report revealed that, between 2021 and 2022, the percentage of phishing sites targeting mobile devices increased from 75% to 80%. Some of them were already using HTTPS but the focus was converting campaigns to target mobile.” “This year, we are seeing a meteoric rise in this tactic for mobile devices, which is a sign of maturing tactics on mobile, and it makes sense. The mobile form factor is conducive to deceiving the user because we rarely see the URL in the browser or the quick redirects. Moreover, we are conditioned to believe a link is secure if it has a padlock icon next to the URL in our browsers. Especially on mobile, users should look beyond the lock icon and carefully verify the website’s domain name before entering any sensitive information,” Vishnubhotla said.


How GPT-4o defends your identity against AI-generated deepfakes

OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. It is an “autoregressive omni model, which accepts as input any combination of text, audio, image and video,” as described on its system card published on Aug. 8. OpenAI writes, “We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that.” Identifying potential deepfake multimodal content is one of the benefits of OpenAI’s design decisions that together define GPT-4o. Noteworthy is the amount of red teaming that’s been done on the model, which is among the most extensive of recent-generation AI model releases industry-wide. All models need to constantly train on and learn from attack data to keep their edge, and that’s especially the case when it comes to keeping up with attackers’ deepfake tradecraft, which is becoming indistinguishable from legitimate content. ... GANs most often consist of two neural networks: a generator that produces synthetic data (images, videos or audio) and a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality to deceive the discriminator. This adversarial technique creates deepfakes nearly indistinguishable from real content.
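To make the generator/discriminator interplay concrete, here is a deliberately tiny GAN sketch, not anything from OpenAI or a deepfake system: both networks are single-parameter models, the "real data" is a 1-D Gaussian, and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must imitate: samples from N(3, 1).
    return rng.normal(3.0, 1.0, n)

# Generator G(z) = w*z + b maps noise z ~ N(0,1) to fake samples.
w, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(a*x + c) estimates P(x is real).
a, c = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.01
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    x_fake = w * z + b
    x_real = real_batch(64)

    # Discriminator step: ascend E[log D(real)] + E[log(1 - D(fake))].
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend E[log D(fake)] (non-saturating loss),
    # differentiating through x_fake = w*z + b.
    d_fake = sigmoid(a * x_fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

samples = w * rng.normal(0.0, 1.0, 10000) + b
print("generator mean:", samples.mean())  # drifts toward the real mean, 3
```

Even this toy version shows the adversarial dynamic the article describes: the generator's parameters move only in directions that raise the discriminator's score, and real GANs scale the same loop up to deep networks over images, video, or audio.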


The 4 Evolutions of Your Observability Journey

Known unknowns can be used to describe the second stage. They fit because we’re looking at things we know we don’t know, but we’re trying to see how well we can develop our understanding of those unknowns, whereas if these were unknown unknowns, we wouldn’t even know where to start. If the first stage is where most of your observability tooling lies, then this is the era of service-level objectives (SLOs); this is also the stage where observability starts being phrased in a “yes, and” manner. … Having developed the ability to ask questions about what happened in a system in the past, you’re probably now primarily concerned with statistical questions and developing more comprehensive correlations. ... Additionally, one of the most interesting developments here is when your incident reports change: They stop being concerned with what happened and start being concerned with how unusual or surprising it was. You’re seeing firsthand this stage of the observability journey in action if you’ve ever read a retrospective that said something like, “We were surprised by the behavior, so we dug in. Even though our alerts were telling us that this other thing was the problem, we investigated the surprising thing first.”
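The SLO framing in this stage boils down to simple error-budget arithmetic. A minimal sketch (the function name and all numbers are invented for illustration):

```python
def error_budget(slo, total_requests, failed_requests):
    """For a request-based SLO, return the number of allowed failures
    and the fraction of the error budget already consumed."""
    allowed = total_requests * (1.0 - slo)
    consumed = failed_requests / allowed if allowed else float("inf")
    return allowed, consumed

# A 99.9% SLO over 1,000,000 requests tolerates about 1,000 failures;
# 500 observed failures means roughly half the budget is spent.
allowed, burn = error_budget(0.999, 1_000_000, 500)
print(allowed, burn)
```

This is the shift the article describes: instead of asking "is the system up?", the team asks "how much of our budget for being wrong have we used, and how fast are we burning it?"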


Be the change you want to see: How to show initiative in the workplace

At one point or another, all of us are probably guilty of posing a question without offering a solution. Often we may feel that others are more qualified to address an issue than we are, and that as long as we bring the matter to someone’s attention, that’s as far as we need to go. While this is well and good – and certainly not every scenario can be dealt with single-handedly – it can be good practice to brainstorm ideas for the problems you identify. It’s important to loop people in and utilise the expertise of others, but you should also have confidence in your ability to tackle an issue. Identifying the problem is half the battle, so why not keep going and see what you come up with? ... Some are born with confidence to spare and some are not; luckily, it is a skill that can be learned over time. Working on improving your confidence level, being more vocal and presenting yourself as an expert in your field are crucial to improving your ability to show initiative, as they mean you are far more likely to take the reins and lead the way. Taking the initiative or going out on a limb, in many scenarios, can be nerve-wracking and you may doubt that you are the best person for the job.


What is RPA? A revolution in business process automation

RPA is often touted as a mechanism to bolster ROI or reduce costs, but it can also be used to improve customer experience. For example, enterprises such as airlines employ thousands of customer service agents, yet customers are still waiting in queues to have their calls fielded. A chatbot could help alleviate some of that wait. ... COOs were some of the earliest adopters of RPA. In many cases, they bought RPA and hit a wall during implementation, prompting them to ask for IT’s help (and forgiveness). Now citizen developers without technical expertise are using cloud software to implement RPA in their business units, and often the CIO has to step in and block them. Business leaders must involve IT from the outset to ensure they get the resources they require. ... Many implementations fail because design and change are poorly managed, says Sanjay Srivastava, chief digital officer of Genpact. In the rush to get something deployed, some companies overlook communication exchanges between the various bots, which can break a business process. “Before you implement, you must think about the operating model design,” Srivastava says. “You need to map out how you expect the various bots to work together.” 


Best practices for implementing threat exposure management, reducing cyber risk exposure

Threat exposure management is the evolution of traditional vulnerability management. Several trends are making it a priority for modern security teams. One is an increase in findings that overwhelms resource-constrained teams: as the attack surface expands to cloud and applications, the volume of findings is compounded by more fragmentation. Cloud, on-prem, and AppSec vulnerabilities come from different tools; identity misconfigurations come from still others. This leads to enormous manual work to centralize, deduplicate, and prioritize findings using a common risk methodology. Finally, all of this is happening while attackers are moving faster than ever, with recent reports showing the median time to exploit a vulnerability is less than one day. Threat exposure management is essential because it continuously identifies and prioritizes risks—such as vulnerabilities and misconfigurations—across all assets, using the risk context applicable to your organization. By integrating with existing security tools, TEM offers a comprehensive view of potential threats, empowering teams to take proactive, automated actions to mitigate risks before they can be exploited.
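The centralize/deduplicate/prioritize work described above can be sketched in a few lines. This is a toy illustration, not a real TEM product; the tool names, fields, and scoring rule are all invented:

```python
# Findings from several scanners, keyed so duplicates collapse, then
# ranked with one shared risk methodology (severity weighted by exposure).
findings = [
    {"tool": "cloud_scanner", "asset": "web-01", "issue": "CVE-2024-0001",
     "severity": 9.8, "internet_exposed": True},
    {"tool": "appsec",        "asset": "web-01", "issue": "CVE-2024-0001",
     "severity": 9.8, "internet_exposed": True},   # duplicate of the above
    {"tool": "cloud_scanner", "asset": "db-01",  "issue": "open-port-5432",
     "severity": 5.0, "internet_exposed": False},
]

deduped = {}
for f in findings:
    deduped[(f["asset"], f["issue"])] = f  # same asset + issue: keep one

def risk(f):
    # One common scoring rule applied to every tool's output.
    return f["severity"] * (2.0 if f["internet_exposed"] else 1.0)

prioritized = sorted(deduped.values(), key=risk, reverse=True)
for f in prioritized:
    print(f["asset"], f["issue"], risk(f))
```

The point of the sketch is the shape of the problem: without a common key and a common risk score, each tool's findings stay in their own silo and the deduplication and ranking happen by hand.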


Understanding VBS Enclaves, Windows’ new security technology

Microsoft recently extended its virtualization-based security model to what it calls VBS Enclaves. If you’ve looked at implementing confidential computing on Windows Server or in Azure, you’ll be familiar with the concept of enclaves, using Intel’s SGX instruction set to lock down areas of memory, using them as a trusted execution environment. ... So how do you build and use VBS Enclaves? First, you’ll need Windows 11 or Windows Server 2019 or later, with VBS enabled. You can do this from the Windows security tool, via a Group Policy, or with Intune to control it via MDM. It’s part of the Memory Integrity service, so you should really be enabling it on all supported devices to help reduce security risks, even if you don’t plan to use VBS Enclaves in your code. The best way to think of it is as a way of using encrypted storage securely. So, for example, if you’re using a database to store sensitive data, you can use code running in an enclave to process and query that data, passing results to the rest of your application. You’re encapsulating data in a secure environment with only essential access allowed. No other parts of your system have access to the decryption keys, so on-disk data stays secure.


Smart(er) Subsea Cables to Provide Early Warning System

With the U.N. estimating between 150 and 200 cable faults annually, operators need all the help they can get to maintain the global fiber network, which carries about 99% of internet traffic between continents. Additionally, $10 trillion in financial transactions flows over these cables per day. This situation has businesses desperately seeking network resiliency and clamoring for always-on network services as their data centers and apps demand maximum uptime. The system has been beset this year by large cable outages, starting in February in the Red Sea and continuing in the spring along Western Africa. ... Equipping the cable with sensors would enhance research into one of the most under-explored regions of the planet: the vast depths of the Southern Ocean, the study read. The Southern Ocean that surrounds Antarctica strongly influences other oceans and climates worldwide, according to the NSF. “Equipping the subsea telecommunications cable with sensors would help researchers better understand how deep-sea currents contribute to global climate change and improve understanding of earthquake seismology and related early warning signs for tsunamis in the earthquake-prone South Pacific region.”


Security Needs to Be Simple and Secure By Default: Google

"Google engineers are working to secure AI and to bring AI to security practitioners," said Steph Hay, senior director of Gemini + UX for cloud security at Google. "Gen AI represents the inflection point of security. It is going to transform security workflows and give the defender the advantage." ... Google also advocates for the convergence of security products and embedding AI into the entire security ecosystem. Through Mandiant, VirusTotal and the Google Cloud Platform, Google aims to drive this convergence, along with safe browsing. Google is making this convergence possible by taking a platform-centric approach through its Security Command Center, or SCC. Hemrajani shared that SCC aims to unify security categories such as cloud security posture management, Kubernetes security posture management, entitlement management and threat intelligence. Security information and event management and security orchestration, automation and response also need to converge. "SCC is bringing all of these together to be able to model the risk that you are exposed to in a holistic manner," he said. "We also realize that there is a power of convergence between cloud risk management and security operations. We need to converge them even further and bring them together to truly benefit."


The AI Revolution: How Machine Learning Changed the World in Two Years

The future of AI in business will involve continued collaboration between governments, businesses, and individuals to address challenges and maximize opportunities presented by this transformative technology. AI is likely to become increasingly integrated into software and hardware, making it easier for businesses to adopt and utilize its capabilities. Success will depend on how it is leveraged to augment human capabilities rather than replace them, creating a future where humans and AI work together in a complementary way. Beyond automating individual tasks, AI is driving a paradigm shift towards unprecedented efficiency across entire business operations. By automating repetitive tasks, AI allows employees to focus on more strategic and creative work, leading to increased productivity and innovation. A recent McKinsey study found AI could potentially automate 45% of the activities currently performed by workers. As well as automating processes, AI can streamline operations and minimize errors, leading to significant cost savings for businesses. For example, automating customer service with AI can reduce the need for human agents, leading to lower labor costs.



Quote for the day:

"Intelligence is the ability to change your mind when presented with accurate information that contradicts your beliefs" -- Vala Afshar

Daily Tech Digest - October 03, 2024

Why Staging Is a Bottleneck for Microservice Testing

Multiple teams often wait for their turn to test features in staging. This creates bottlenecks. The pressure on teams to share resources can severely delay releases, as they fight for access to the staging environment. Developers who attempt to spin up the entire stack on their local machines for testing run into similar issues. As distributed systems engineer Cindy Sridharan notes, “I now believe trying to spin up the full stack on developer laptops is fundamentally the wrong mindset to begin with, be it at startups or at bigger companies.” The complexities of microservices make it impractical to replicate entire environments locally, just as it’s difficult to maintain shared staging environments at scale. ... From a release process perspective, the delays caused by a fragile staging environment lead to slower shipping of features and patches. When teams spend more time fixing staging issues than building new features, product development slows down. In fast-moving industries, this can be a major competitive disadvantage. If your release process is painful, you ship less often, and the cost of mistakes in production is higher. 


Misconfiguration Madness: Thwarting Common Vulnerabilities in the Financial Sector

Financial institutions require legions of skilled security personnel in order to overcome the many challenges facing their industry. Developers are an especially important part of that elite cadre of defenders for a variety of reasons. First and foremost, security-aware developers can write secure code for new applications, which can thwart attackers by denying them a foothold in the first place. If there are no vulnerabilities to exploit, an attacker won't be able to operate, at least not very easily. Developers with the right training can also help to support both modern and legacy applications by examining the existing code that makes up some of the primary vectors used to attack financial institutions. That includes cloud misconfigurations, lax API security, and the many legacy bugs found in applications written in COBOL and other aging computer languages. However, the task of nurturing and maintaining security-aware developers in the financial sector won’t happen on its own. It requires precise, immersive training programs that are highly customizable and matched to the specific complex environment that a financial services institution is using.


3 things to get right with data management for gen AI projects

The first is a series of processes — collecting, filtering, and categorizing data — that may take several months for KM or RAG models. Structured data is relatively easy, but the unstructured data, while much more difficult to categorize, is the most valuable. “You need to know what the data is, because it’s only after you define it and put it in a taxonomy that you can do anything with it,” says Shannon. ...  “We started with generic AI usage guidelines, just to make sure we had some guardrails around our experiments,” she says. “We’ve been doing data governance for a long time, but when you start talking about automated data pipelines, it quickly becomes clear you need to rethink the older models of data governance that were built more around structured data.” Compliance is another important area of focus. As a global enterprise thinking about scaling some of their AI projects, Harvard keeps an eye on evolving regulatory environments in different parts of the world. It has an active working group dedicated to following and understanding the EU AI Act, and before their use cases go into production, they run through a process to make sure all compliance obligations are satisfied.


Fundamentals of Data Preparation

Data preparation is intended to improve the quality of the information that ML and other information systems use as the foundation of their analyses and predictions. Higher-quality data leads to greater accuracy in the analyses the systems generate in support of business decision-makers. This is the textbook explanation of the link between data preparation and business outcomes, but in practice, the connection is less linear. ... Careful data preparation adds value to the data itself, as well as to the information systems that rely on the data. It goes beyond checking for accuracy and relevance and removing errors and extraneous elements. The data-prep stage gives organizations the opportunity to supplement the information by adding geolocation, sentiment analysis, topic modeling, and other aspects. Building an effective data preparation pipeline begins long before any data has been collected. As with most projects, the preparation starts at the end: identifying the organization’s goals and objectives, and determining the data and tools required to achieve those goals. ... Appropriate data preparation is the key to the successful development and implementation of AI systems in large part because AI amplifies existing data quality problems. 
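The enrichment step described above (adding sentiment, geolocation, and similar derived fields during data prep) can be illustrated with a toy pipeline. The wordlists and records below are invented, and the naive wordlist tagger merely stands in for a real sentiment-analysis model:

```python
# A minimal prep pass: filter out low-quality records, then enrich the
# survivors with a derived "sentiment" field.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "slow", "broken"}

def sentiment(text):
    # Naive wordlist scoring: a placeholder for a real model.
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

raw = [
    {"id": 1, "text": "Great service, love it"},
    {"id": 2, "text": "   "},                # blank: dropped by the filter
    {"id": 3, "text": "Checkout is slow and broken"},
]

prepared = [
    {**r, "sentiment": sentiment(r["text"])}
    for r in raw
    if r["text"].strip()                     # basic quality check
]
print(prepared)
```

Even in this toy form, the pipeline shows the two moves the article names: removing errors and extraneous elements, and then adding value to the data beyond what was collected.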


How to Rein in Cybersecurity Tool Sprawl

Security tool sprawl happens for many different reasons. Adding new tools and new vendors as new problems arise without evaluating the tools already in place is often how sprawl starts. The sheer glut of tools available in the market can make it easy for security teams to embrace the latest and greatest solutions. “[CISOs] look for the newest, the latest and the greatest. They're the first adopter type,” says Reiter. A lack of communication between departments and teams in an enterprise can also contribute. “There's the challenge of teams not necessarily knowing the day-to-day functions of other teams,” says Mar-Tang. Security leaders can start to wrap their heads around the problem of sprawl by running an audit of the security tools in place. Which teams use which tools? How often are the tools used? How many vendors supply those tools? What are the lengths of the vendor contracts? Breaking down communication barriers within an enterprise will be a necessary part of answering questions like these. “Talk to the … security and IT risk side of your house, the people who clean up the mess. You have an advocate and a partner to be able to find out where you have holes and where you have sprawl,” Kris Bondi, CEO and co-founder at endpoint security company Mimoto, recommends.


The Promise and Perils of Generative AI in Software Testing

The journey from human automation tester to AI test automation engineer is transformative. Traditionally, transitioning to test automation required significant time and resources, including learning to code and understanding automation frameworks. AI removes these barriers and accelerates development cycles, dramatically reducing time-to-market and improving accuracy, all while decreasing the level of admin tasks for software testers. AI-powered tools can interpret test scenarios written in plain language, automatically generate the necessary code for test automation, and execute tests across various platforms and languages. This dramatically reduces the enablement time, allowing QA professionals to focus on strategic tasks instead of coding complexities. ... As GenAI becomes increasingly integrated into software development life cycles, understanding its capabilities and limitations is paramount. By effectively managing these dynamics, development teams can leverage GenAI’s potential to enhance their testing practices while ensuring the integrity of their software products.


Near-'perfctl' Fileless Malware Targets Millions of Linux Servers

The malware looks for vulnerabilities and misconfigurations to exploit in order to gain initial access. To date, Aqua Nautilus reports, the malware has likely targeted millions of Linux servers, and compromised thousands. Any Linux server connected to the Internet is in its sights, so any server that hasn't already encountered perfctl is at risk. ... By tracking its infections, researchers identified three Web servers belonging to the threat actor: two that were previously compromised in prior attacks, and a third likely set up and owned by the threat actor. One of the compromised servers was used as the primary base for malware deployment. ... To further hide its presence and malicious activities from security software and researcher scrutiny, it deploys a few Linux utilities repurposed into user-level rootkits, as well as one kernel-level rootkit. The kernel rootkit is especially powerful, hooking into various system functions to modify their functionality, effectively manipulating network traffic, undermining Pluggable Authentication Modules (PAM), establishing persistence even after primary payloads are detected and removed, or stealthily exfiltrating data. 


Three hard truths hindering cloud-native detection and response

Most SOC teams either lack the proper tooling or have so many cloud security point tools that the management burden is untenable. Cloud attacks happen way too fast for SOC teams to flip from one dashboard to another to determine if an application anomaly has implications at the infrastructure level. Given the interconnectedness of cloud environments and the accelerated pace at which cloud attacks unfold, if SOC teams can’t see everything in one place, they’ll never be able to connect the dots in time to respond. More importantly, because everything in the cloud happens at warp speed, we humans need to act faster, which can be nerve-wracking and increase the chance of accidentally breaking something. While the latter is a legitimate concern, if we want to stay ahead of our adversaries, we need to get comfortable with the accelerated pace of the cloud. While there are no quick fixes to these problems, the situation is far from hopeless. Cloud security teams are getting smarter and more experienced, and cloud security toolsets are maturing in lockstep with cloud adoption. And I, like many in the security community, am optimistic that AI can help deal with some of these challenges.


How to Fight ‘Technostress’ at Work

Digital stressors don’t occur in isolation, according to the researchers, which necessitates a multifaceted approach. “To address the problem, you can’t just address the overload and invasion,” Thatcher said. “You have to be more strategic.” “Let’s say I’m a manager, and I implement a policy that says no email on weekends because everybody’s stressed out,” Thatcher said. “But everyone stays stressed out. That’s because I may have gotten rid of techno-invasion—that feeling that work is intruding on my life—but on Monday, when I open my email, I still feel really overloaded because there are 400 emails.” It’s crucial for managers to assess the various digital stressors affecting their employees and then target them as a combination, according to the researchers. That means to address the above problem, Thatcher said, “you can’t just address invasion. You can’t just address overload. You have to address them together,” he said. ... Another tool for managers is empowering employees, according to the study. “As a manager, it may feel really dangerous to say, ‘You can structure when and where and how you do work.’ 


Fix for BGP routing insecurity ‘plagued by software vulnerabilities’ of its own, researchers find

Under BGP, there is no way to authenticate routing changes. The arrival of RPKI just over a decade ago was intended to fix that, using a digital record called a Route Origin Authorization (ROA) that identifies an ISP as having authority over specific IP infrastructure. Route origin validation (ROV) is the process a router undergoes to check that an advertised route is authorized by the correct ROA certificate. In principle, this makes it impossible for a rogue router to maliciously claim a route it does not have any right to. RPKI is the public key infrastructure that glues this all together, security-wise. The catch is that, for this system to work, RPKI needs a lot more ISPs to adopt it, something which until recently has happened only very slowly. ... “Since all popular RPKI software implementations are open source and accept code contributions by the community, the threat of intentional backdoors is substantial in the context of RPKI,” they explained. A software supply chain that creates such vital software enabling internet routing should be subject to a greater degree of testing and validation, they argue.
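The ROA/ROV check described above can be sketched compactly. This is a simplified illustration in the spirit of RFC 6811, not production RPKI code; the prefixes and AS numbers are documentation examples:

```python
import ipaddress

def validate(route_prefix, origin_asn, roas):
    """Classify a route against a list of ROAs, each a tuple of
    (prefix, max_length, asn): 'valid' if a covering ROA matches the
    origin ASN within its max length, 'invalid' if the prefix is covered
    but never matched, 'not-found' if no ROA covers it at all."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_length, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if route.version == roa_net.version and route.subnet_of(roa_net):
            covered = True
            if origin_asn == roa_asn and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("203.0.113.0/24", 24, 64500)]
print(validate("203.0.113.0/24", 64500, roas))   # valid
print(validate("203.0.113.0/24", 64501, roas))   # invalid: wrong origin AS
print(validate("203.0.113.0/25", 64500, roas))   # invalid: exceeds maxLength
print(validate("198.51.100.0/24", 64500, roas))  # not-found: no covering ROA
```

The second and third cases show why ROV matters: a rogue origin AS, or a more-specific hijack of a covered prefix, is flagged as invalid rather than silently accepted.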



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - October 02, 2024

Breaking through AI data bottlenecks

One of the most significant bottlenecks in training specialized AI models is the scarcity of high-quality, domain-specific data. Building enterprise-grade AI requires increasing amounts of diverse, highly contextualized data, of which there are limited supplies. This scarcity, sometimes known as the “cold start” problem, is only growing as companies license their data and further segment the internet. For startups and leading AI teams building state-of-the-art generative AI products for specialized use cases, public data sets also offer capped value, due to their lack of specificity and timeliness. ... Synthesizing data not only increases the volume of training data but also enhances its diversity and relevance to specific problems. For instance, financial services companies are already using synthetic data to rapidly augment and diversify real-world training sets for more robust fraud detection — an effort that is supported by financial regulators like the UK’s Financial Conduct Authority. By using synthetic data, these companies can generate simulations of never-before-seen scenarios and gain safe access to proprietary data via digital sandboxes.


Five Common Misconceptions About Event-Driven Architecture

Event sourcing is an approach to persisting data within a service. Instead of writing the current state to the database, and updating that stored data when the state changes, you store an event for every state change. The state can then be restored by replaying the events. Event-driven architecture is about communication between services. A service publishes any changes in its subdomain it deems potentially interesting for others, and other services subscribe to these updates. These events are carriers of state and triggers of actions on the subscriber side. While these two patterns complement each other well, you can have either without the other. ... Just as you can use Kafka without being event-driven, you can build an event-driven architecture without Kafka. And I’m not only talking about “Kafka replacements”, i.e. other log-based message brokers. I don’t know why you’d want to, but you could use a store-and-forward message queue (like ActiveMQ or RabbitMQ) for your eventing. You could even do it without any messaging infrastructure at all, e.g. by implementing HTTP feeds. Just because you could, doesn’t mean you should! A log-based message broker is most likely the best approach for you, too, if you want an event-driven architecture.
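The event-sourcing half of the distinction above can be made concrete with a minimal sketch: state changes are stored as an append-only log, and current state is rebuilt by replay. The event names and amounts are illustrative, not tied to any framework:

```python
events = []  # the append-only event store for one account

def apply(balance, event):
    # Derive the next state from the previous state plus one event.
    kind, amount = event
    if kind == "deposited":
        return balance + amount
    if kind == "withdrew":
        return balance - amount
    raise ValueError(f"unknown event: {kind}")

def record(event):
    events.append(event)  # never update in place; only append

record(("deposited", 100))
record(("withdrew", 30))
record(("deposited", 5))

# Current state is derived, not stored: replay the whole history.
balance = 0
for e in events:
    balance = apply(balance, e)
print(balance)  # 75
```

Note that nothing here involves a message broker or another service: the log is purely internal persistence. Event-driven architecture would begin only if these events were also published for other services to subscribe to.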


Mostly AI’s synthetic text tool can unlock enterprise emails and conversations for AI training

Mostly AI provides enterprises with a platform to train their own AI generators that can produce synthetic data on the fly. The company started off by enabling the generation of structured tabular datasets, capturing nuances of transaction records, patient journeys and customer relationship management (CRM) databases. Now, as the next step, it is expanding to text data. While proprietary text datasets – like emails, chatbot conversations and support transcriptions – are collected on a large scale, they are difficult to use because they contain PII (like customer information), have diversity gaps and are only partially structured. With the new synthetic text functionality on the Mostly AI platform, users can train an AI generator using any proprietary text they have and then deploy it to produce a cleansed synthetic version of the original data, free from PII or diversity gaps. ... The new feature, and its ability to unlock value from proprietary text without privacy concerns, makes it a lucrative offering for enterprises looking to strengthen their AI training efforts. The company claims training a text classifier on its platform’s synthetic text resulted in a 35% performance enhancement compared to data generated by prompting GPT-4o-mini.


Not Maintaining Data Quality Today Would Mean Garbage In, Disasters Out

Enterprises are increasingly data-driven and rely heavily on the collected data to make decisions, says Choudhary. Also, a decade ago, a single application stored all its data in a relational database for weekly reporting. Today, data is scattered across various sources including relational databases, third-party data stores, cloud environments, on-premise systems, and hybrid models, says Choudhary. This shift has made data management much more complex, as all of these sources need to be harmonized in one place. However, in the world of AI, both structured and unstructured data need to be of high quality. Choudhary states that not maintaining data quality in the AI age would lead to garbage in, disasters out. Highlighting the relationship between AI and data observability in enterprise settings, he says that given the role of both structured and unstructured data in enterprises, data observability will become more critical. ... However, AI also requires the unstructured business context, such as documents from wikis, emails, design documents, and business requirement documents (BRDs). He stresses that this unstructured data adds context to the factual information on which business models are built.


Three Evolving Cybersecurity Attack Strategies in OT Environments

Attackers are increasingly targeting supply chains, capitalizing on the trust between vendors and users to breach OT systems. This method offers a high return on investment, as compromising a single supplier can result in widespread breaches. The Dragonfly attacks, where attackers penetrated hundreds of OT systems by replacing legitimate software with Trojanized versions, exemplify this threat. ... Attack strategies are shifting from immediate exploitation to establishing persistent footholds within OT environments. Attackers now prefer to lie dormant, waiting for an opportune moment to strike, such as during economic instability or geopolitical events. This approach allows them to exploit unknown or unpatched vulnerabilities, as demonstrated by the Log4j and Pipedream attacks. ... Attackers are increasingly focused on collecting and storing encrypted data from OT environments for future exploitation, particularly with the impending advent of post-quantum computing. This poses a significant risk to current encryption methods, potentially allowing attackers to decrypt previously secure data. Manufacturers must implement additional protective layers and consider future-proofing their encryption strategies to safeguard data against these emerging threats.


Mitigating Cybersecurity Risk in Open-Source Software

Unsurprisingly, open-source software's lineage is complex. Whereas commercial software is typically designed, built and supported by one corporate entity, open-source code could be written by a lone developer, a well-resourced open-source community or a teenage whiz kid. Libraries containing all of this open-source code, procedures and scripts are extensive. They can contain libraries within libraries, each with its own family tree. A single open-source project may have thousands of lines of code from hundreds of authors, which can make line-by-line code analysis impractical and may result in vulnerabilities slipping through the cracks. These challenges are further exacerbated by the fact that many libraries are stored on public repositories such as GitHub, which may be compromised by bad actors injecting malicious code into a component. Vulnerabilities can also be accidentally introduced by developers. Synopsys' OSSRA report found that 74% of the audited code bases had high-risk vulnerabilities. And don't forget patching, updates and security notifications that are standard practices from commercial suppliers but likely lacking (or far slower) in the world of open-source software.


Will AI Middle Managers Be the Next Big Disruption?

Trust remains a critical barrier, with many companies double-checking AI outputs, especially in sensitive areas such as compliance. But as the use of explainable AI grows, offering transparent decision-making, companies may begin to relax their guard and fully integrate AI as a trusted part of the workforce. Yet despite its vast potential and transformative abilities, autonomous AI is unlikely to work without human supervision. AI lacks the emotional intelligence needed to navigate complex human relationships, and companies are often skeptical of assigning decision-making to AI tools. ... "One thing that won't change is that work is still centered around humans, so that people can bring their creativity, which is such an important human trait," said Fiona Cicconi, chief people officer at Google. Accenture's report highlights just that. Technology alone will not drive AI-driven growth. ... Having said that, managers will have to roll up their sleeves, upskill and adapt to AI and emerging technologies that benefit their teams and align with organizational objectives. To fully realize the potential of AI, businesses will need to prioritize human-AI collaboration.


Managing Risk: Is Your Data Center Insurance up to the Test?

E&O policies generally protect against liability to third parties for losses arising from the insured’s errors and omissions in performing “professional services.” ... Cyber coverage typically protects against a broad range of first-party losses and liability claims arising from various causes, including data breaches and other disclosures of non-public information. A data center that processes data owned by third parties plainly has liability exposure to such parties if their non-public information is disclosed as a result of the data center’s operations. But even if a data center is processing only its own company’s data, it still has liability exposure, including for disclosure of non-public information belonging to its customers and employees. Given the often-substantial costs of defending data breach claims, data center operators would be well-advised to (1) review their cyber policies carefully for exclusions or limitations that potentially could apply to their liability coverage under circumstances particular to their operations and (2) purchase cyber liability limits commensurate with the amount and sensitivity of non-public data in their possession.


Attribution as the foundation of developer trust

With the need for more trust in AI-generated content, it is critical to credit the author/subject matter expert and the larger community who created and curated the content shared by an LLM. This also ensures LLMs use the most relevant and up-to-date information and content, ultimately presenting the Rosetta Stone needed by a model to build trust in sources and resulting decisions. All of our OverflowAPI partners have enabled attribution through retrieval augmented generation (RAG). For those who may not be familiar with it, retrieval augmented generation is an AI framework that combines generative large language models (LLMs) with traditional information retrieval systems to update answers with the latest knowledge in real time (without requiring re-training models). This is because generative AI technologies are powerful but limited by what they “know” or “the data they have been trained on.” RAG helps solve this by pairing information retrieval with carefully designed system prompts that enable LLMs to provide relevant, contextual, and up-to-date information from an external source. In instances involving domain-specific knowledge, RAG can drastically improve the accuracy of an LLM's responses.
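The RAG pattern described above can be sketched in a few lines. This is a minimal illustration, not the OverflowAPI integration itself: the document store, the word-overlap scoring (standing in for a real vector search), and the source identifiers are all hypothetical.

```python
# Minimal RAG sketch: retrieve relevant documents, then build an LLM prompt
# that includes the retrieved context with source attribution.
# The documents and the naive scoring below are illustrative stand-ins.

DOCS = [
    {"source": "community-post-231767", "text": "yield turns a function into a generator in Python"},
    {"source": "community-post-419163", "text": "if __name__ guards protect script entry points"},
]

def retrieve(query: str, docs, k: int = 1):
    """Rank documents by word overlap with the query (stand-in for vector retrieval)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d["text"].lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs) -> str:
    """Prepend retrieved context, tagged with its source, to the user query."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return f"Answer using the sources below and cite them.\n{context}\n\nQuestion: {query}"

hits = retrieve("what does yield do in Python", DOCS)
print(build_prompt("what does yield do in Python", hits))
```

Because the retrieved passages travel into the prompt with their source tags, the model's answer can carry attribution back to the original author, which is the trust mechanism the passage describes.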


Measurement Challenges in AI Catastrophic Risk Governance and Safety Frameworks

The current definition of catastrophic events, focusing on "large-scale devastation... directly caused by an AI model," overlooks a critical aspect: indirect causation and salient contributing causes. Indirect causation refers to cases where AI plays a pivotal but not immediately apparent role. For instance, the development and deployment of advanced AI models could trigger an international AI arms race, becoming a salient contributor to increased geopolitical instability or conflict. A concrete example might be AI-enhanced cyber warfare capabilities leading to critical infrastructure failures across multiple countries. AI systems might also amplify existing systemic risks or introduce new vulnerabilities that become salient contributing causes to a catastrophic event. The current narrow scope of AI catastrophic events may lead to underestimating the full range of potential catastrophic outcomes associated with advanced AI models, particularly those arising from complex interactions between AI and other sociotechnical systems. This could include scenarios where AI exacerbates climate change through increased energy consumption or where AI-powered misinformation campaigns gradually lead to the breakdown of trust in democratic institutions and social order.



Quote for the day:

"Facing difficult circumstances does not determine who you are. They simply bring to light who you already were." -- Chris Rollins

Daily Tech Digest - October 01, 2024

9 types of phishing attacks and how to identify them

Different victims, different paydays. A phishing attack specifically targeting an enterprise’s top executives is called whaling, as the victim is considered to be high-value, and the stolen information will be more valuable than what a regular employee may offer. The account credentials belonging to a CEO will open more doors than an entry-level employee. The goal is to steal data, employee information, and cash. ... Clone phishing requires the attacker to create a nearly identical replica of a legitimate message to trick the victim into thinking it is real. The email is sent from an address resembling the legitimate sender, and the body of the message looks the same as a previous message. The only difference is that the attachment or the link in the message has been swapped out with a malicious one. ... Snowshoeing, or “hit-and-run” spam, requires attackers to push out messages via multiple domains and IP addresses. Each IP address sends out a low volume of messages, so reputation- or volume-based spam filtering technologies can’t recognize and block malicious messages right away. Some of the messages make it to the email inboxes before the filters learn to block them.


The End Of The SaaS Era: Rethinking Software’s Role In Business

While the traditional SaaS model may be losing its luster, software itself remains a critical component of modern business operations. The key shift is in how companies think about and utilize software. Rather than viewing it as a standalone business model, forward-thinking entrepreneurs and executives are beginning to see software as a powerful tool for creating value in other business contexts. ... Consider a hypothetical scenario where a tech company develops an AI-powered inventory management system that dramatically improves efficiency for retail businesses. Instead of simply selling this system as a SaaS product, the company could use it as leverage to acquire successful retail operations. By implementing their proprietary software, they could significantly boost the profitability of these businesses, creating value far beyond what they might have captured through traditional software licensing. ... Proponents of this new approach argue that while others will eventually catch up in terms of software capabilities, the first-movers will have already used their technological edge to acquire valuable real-world assets. 


How Agentless Security Can Prevent Major Ops Outages

An agentless security model is a modern way to secure cloud environments without installing agents on each workload. It uses cloud providers’ native tools and APIs to monitor and protect assets like virtual machines, containers and serverless functions. Here’s how it works: Data is collected through API calls, providing real-time insights into vulnerabilities. A secure proxy ensures seamless communication without affecting performance. This model continuously scans workloads, offering 100% visibility and detecting issues without disruption. ... Instead of picking between agent-based and agentless security, you can use both together. Agent-based security works best for stable, less-changing systems. It offers deep, ongoing monitoring when things stay the same. On the other hand, agentless security is great for fast-paced cloud setups where new workloads come and go often. It gives real-time insights without needing to install anything, making it flexible for larger cloud systems. A hybrid approach gives you stronger protection and keeps up with changing threats, making sure your defenses are ready for whatever comes next.


The inner workings of a Conversational AI

The initial stage of interaction between a user and an AI system involves input processing. When a user submits a prompt, the system undergoes a series of preprocessing steps to transform raw text into a structured format suitable for machine comprehension. Natural Language Processing (NLP) techniques are employed to break down the text into individual words or tokens, a process known as tokenization. ... Once the system has a firm grasp of the user’s intent through input processing, it embarks on the crucial phase of knowledge retrieval. This involves sifting through vast repositories of information to extract relevant data. Traditional information retrieval techniques like BM25 or TF-IDF are employed to match the processed query with indexed documents. An inverted index, a data structure mapping words to the documents containing them, accelerates this search process. ... With relevant information gathered, the system transitions to the final phase: response generation. This involves constructing a coherent and informative text that directly addresses the user’s query. Natural Language Generation (NLG) techniques are employed to transform structured data into human-readable language.
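The retrieval stage described above can be made concrete with a small sketch: tokenization, an inverted index mapping words to the documents containing them, and TF-IDF scoring. The documents and queries are made up; BM25 refines the same idea with term-frequency saturation and document-length normalization.

```python
# Sketch of tokenization, an inverted index, and TF-IDF ranking.
import math
from collections import defaultdict

docs = {
    0: "the cat sat on the mat",
    1: "the dog chased the cat",
    2: "stocks rallied on strong earnings",
}

def tokenize(text: str):
    """Break text into individual word tokens (a real NLP pipeline also normalizes)."""
    return text.lower().split()

# Inverted index: word -> set of doc ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in tokenize(text):
        index[token].add(doc_id)

def tf_idf_score(query: str, doc_id: int) -> float:
    """Sum TF-IDF weights of the query terms that appear in the document."""
    tokens = tokenize(docs[doc_id])
    score = 0.0
    for term in tokenize(query):
        tf = tokens.count(term) / len(tokens)
        df = len(index.get(term, ()))
        if df:
            score += tf * math.log(len(docs) / df)
    return score

def search(query: str):
    # The inverted index supplies candidates directly, avoiding a full scan.
    candidates = set().union(*(index.get(t, set()) for t in tokenize(query)))
    return sorted(candidates, key=lambda d: tf_idf_score(query, d), reverse=True)

print(search("cat mat"))
```

Rare terms like "mat" carry more weight than common ones like "cat", so document 0 ranks first, which is exactly the intuition behind TF-IDF.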


Can We Ever Trust AI Agents?

The consequences of misplaced trust in AI agents could be dire. Imagine an AI-powered financial advisor that inadvertently crashes markets due to a misinterpreted data point, or a healthcare AI that recommends incorrect treatments based on biased training data. The potential for harm is not limited to individual sectors; as AI agents become more integrated into our daily lives, their influence grows exponentially. A misstep could ripple through society, affecting everything from personal privacy to global economics. At the heart of this trust deficit lies a fundamental issue: centralization. The development and deployment of AI models have largely been the purview of a handful of tech giants. ... The tools for building trust in AI agents already exist. Blockchains can enable verifiable computation, ensuring that AI actions are auditable and traceable. Every decision an AI agent makes could be recorded on a public ledger, allowing for unprecedented transparency. Concurrently, advanced cryptographic techniques like trusted execution environment machine learning (TeeML) can protect sensitive data and maintain model integrity, achieving both transparency and privacy.


Reducing credential complexity with identity federation

One potential challenge organizations may encounter when implementing federated identity management in cross-organization collaborations is ensuring a seamless trust relationship between multiple identity providers and service providers. If the trust isn’t well established or managed, it can lead to security vulnerabilities or authentication issues. Additionally, the complexity of managing multiple identity providers can become problematic if there is a need to merge user identities across systems. For example, ensuring that all identity providers fulfill their roles without conflicting or creating duplicate identities can be challenging. Finally, while federated identity management improves convenience, it can come at the cost of time-consuming engineering and IT work to set up and maintain these IdP-SP connections. Traditional in-house implementation may also mean these connections are 1:1 and hard-coded, which will make ongoing modifications even tougher. Organizations need to balance the benefits of federated identity management against the time and cost investment needed, whether they do it in-house or with a third-party solution.


AI: Maximizing innovation for good

Businesses need to understand that AI technology is here to stay. Strong AI strategies define the purpose and objectives of adopting AI and lay out the processes by which businesses can prove value and absorb the rapid pace of change in the technology itself. Implementation needs to ensure that solutions mesh effectively with IT infrastructure that’s already in place. Digitalization, digital transformation, and upgrading legacy systems, as overarching initiatives, require planning and an understanding of how they will impact wider business functions. That’s not to say it needs to be slow or cumbersome, however – one of the joys of AI is the ease with which it can put powerful new capabilities in the hands of teams. When due diligence is conducted effectively, AI integration can become the lynchpin that elevates business practices – boosting productivity and efficiency and lowering costs. The opportunities for improvement cannot be overstated, especially when looking at wider settings outside of just the industrial or financial sectors. Ultimately, overreaching when implementing AI can create a situation where integrated tools muddy the water and dilute the effectiveness of their intended use.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because it gives the impression of removing users’ access to files and accounts they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. Users often don’t see the security benefit and resist the approach, seeing it as an obstacle to doing their jobs and a source of frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. Although PAM solutions possess the capability for scanning and discovering privileged accounts, these solutions are limited by the scope of the instructions they receive, thus providing only partial visibility into system access and usage.


Microsoft researchers propose framework for building data-augmented LLM applications

“Data augmented LLM applications is not a one-size-fits-all solution,” the researchers write. “The real-world demands, particularly in expert domains, are highly complex and can vary significantly in their relationship with given data and the reasoning difficulties they require.” To address this complexity, the researchers propose a four-level categorization of user queries based on the type of external data required and the cognitive processing involved in generating accurate and relevant responses:

– Explicit facts: Queries that require retrieving explicitly stated facts from the data.
– Implicit facts: Queries that require inferring information not explicitly stated in the data, often involving basic reasoning or common sense.
– Interpretable rationales: Queries that require understanding and applying domain-specific rationales or rules that are explicitly provided in external resources.
– Hidden rationales: Queries that require uncovering and leveraging implicit domain-specific reasoning methods or strategies that are not explicitly described in the data.

Each level of query presents unique challenges and requires specific solutions to effectively address them.
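One way an application might use this taxonomy is to route each query level to a different pipeline. The levels come from the framework described above, but the routing table below is hypothetical; the specific pipeline names are illustrative, not Microsoft's recommendations.

```python
# Illustrative router over the four query levels from the Microsoft framework.
# The pipeline choices are hypothetical examples of how an app might respond
# to each level, not part of the paper itself.
from enum import Enum

class QueryLevel(Enum):
    EXPLICIT_FACTS = 1            # answer is stated verbatim in the data
    IMPLICIT_FACTS = 2            # answer needs light inference across facts
    INTERPRETABLE_RATIONALES = 3  # apply rules written down in external resources
    HIDDEN_RATIONALES = 4         # apply reasoning the data never states explicitly

PIPELINE = {
    QueryLevel.EXPLICIT_FACTS: "single-pass retrieval (basic RAG)",
    QueryLevel.IMPLICIT_FACTS: "multi-hop retrieval plus reasoning",
    QueryLevel.INTERPRETABLE_RATIONALES: "rule-aware prompting",
    QueryLevel.HIDDEN_RATIONALES: "domain fine-tuning or in-context learning",
}

def route(level: QueryLevel) -> str:
    """Map a query level to a handling strategy."""
    return PIPELINE[level]

print(route(QueryLevel.EXPLICIT_FACTS))
```

The point of the taxonomy is exactly this kind of differentiation: a single retrieval pass that suffices for explicit facts will not surface hidden rationales.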


Unleashing the Power Of Business Application Integration

In many cases, businesses are replacing their legacy software solutions with a modular selection of applications hosted within a public cloud environment. Given the increasing maturity of this market, there is now a range of application stores and marketplaces from the likes of AWS, Microsoft and Google. These have made it much easier for IT teams to identify, purchase and integrate proven applications as part of a bespoke, enterprise-wide ERP strategy. ... Once IT teams have selected and integrated the right business applications within their environment, the next step is to focus on data strategy. The main objective here should be to ensure that data is of the highest quality and can be used to address a diverse range of key business objectives, from driving profit, efficiency and innovation to improving customer service. This process can be complex and challenging, but there are a number of steps organisations can take to fully exploit their data assets. These include optimising the performance and availability of an existing data environment and prioritising data systems migration.



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine

Daily Tech Digest - September 30, 2024

What Will Be the Next Big Thing in AI?

The next big thing in AI will likely be advanced multimodal models that can seamlessly integrate and process different types of data, including text, images, audio, and video, in more human-like ways, says Dinesh Puppala, regulatory affairs lead at Google. "We're moving beyond models that specialize in one type of data toward AI systems that can understand and generate across multiple modalities simultaneously, much like humans do," he notes. Advanced multimodal models will enable more natural and context-aware human-AI interactions. "They'll be better at understanding nuanced queries, interpreting visual and auditory cues, and providing more holistic and relevant responses," Puppala predicts. ... Metacognition in AI -- systems that can think about the way they think -- is on the mind of Isak Nti Asare, co-director of the cybersecurity and global policy program at Indiana University. "This capability, often described as AI self-awareness, is a necessary frontier to cross if we are to build trustworthy systems that can explain their decisions." Current AI systems, while advanced, often operate as "black boxes" where even their creators cannot fully explain their outputs. 


Best Practices for Sunsetting Mainframe Applications

The first crucial step in migrating from a mainframe to the cloud is the discovery phase. During this phase, organizations must conduct a thorough assessment of their current mainframe environment, including architecture, applications, data, dependencies, and workflows. This comprehensive understanding helps in identifying potential risks and planning the migration process effectively. The insights gained are crucial for setting the stage for the subsequent cost-benefit analysis (CBA), ensuring all stakeholders are on board with the proposed changes. A detailed CBA is essential to evaluate the financial feasibility and potential returns of the migration project. This analysis should account for all costs associated with the migration, including software licensing, cloud storage fees, and ongoing maintenance costs. It should also highlight the benefits, such as improved operational efficiency and reduced downtime, which are crucial for gaining stakeholder support. ... Effective risk management is crucial for a successful migration. This involves ensuring the availability of subject matter experts, comprehensive planning, and addressing potential issues with legacy systems. 
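The cost-benefit analysis described above reduces to simple arithmetic once the line items are gathered. The sketch below uses entirely made-up figures; a real CBA would include many more items (licensing, retraining, parallel-run costs, risk reserves) and would typically discount future cash flows.

```python
# Toy cost-benefit calculation for a mainframe-to-cloud migration.
# All figures are illustrative placeholders, not real costs.

def migration_net_benefit(one_time_costs: float, annual_cloud_costs: float,
                          annual_mainframe_costs: float, years: int) -> float:
    """Net savings over the horizon: avoided mainframe spend,
    minus ongoing cloud spend and the one-time migration cost."""
    annual_savings = annual_mainframe_costs - annual_cloud_costs
    return annual_savings * years - one_time_costs

# Example: $2M migration cost, $1.5M/yr cloud vs. $3M/yr mainframe, 5-year horizon.
net = migration_net_benefit(2_000_000, 1_500_000, 3_000_000, 5)
print(f"Net 5-year benefit: ${net:,.0f}")  # positive -> migration pays for itself
```

Even a toy model like this makes the stakeholder conversation concrete: the migration breaks even once cumulative annual savings exceed the one-time cost.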


Security spending signals major role change for CISOs and their teams

“Expected to do more with less,” CISOs are shifting their focus, Kalinov adds. “Instead of beefing up their internal teams, they’re focusing on risk management, regulatory compliance, and keeping C-suite executives aware of the evolving security landscape,” Kalinov says. James Neilson, SVP of international sales at cybersecurity vendor OPSWAT, believes the increasing allocation of security budgets toward software and services rather than staff reflects the CISO’s evolving role from managing internal teams toward becoming a more strategic, technology-driven leader. “This trend also indicates that they’re taking on a more prominent role in risk management, ensuring that outsourced services complement internal capabilities while maintaining agility in response to evolving threats,” Neilson says. As a result, security organizations are also undergoing a shift from traditionally siloed, in-house approaches toward a more integrated, outsourced, and technology-driven model, Neilson argues. ... “Organizations increasingly rely on elements of external managed services and advanced automation tools to manage cybersecurity, focusing internal resources on understanding the business and its risks, defining higher-level strategy, oversight, and risk management,” Neilson contends.


Shadow AI, Data Exposure Plague Workplace Chatbot Use

The issue is that the most prevalent chatbots capture whatever information users put into prompts — which could be things like proprietary earnings data, top-secret design plans, sensitive emails, customer data, and more — and send it back to the large language models (LLMs), where it's used to train the next generation of GenAI. ... ChatGPT’s creator, OpenAI, warns in its user guide, "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." But it's hard for the average worker to constantly be thinking about data exposure. Lisa Plaggemier, executive director of NCA, notes one case that illustrates how the risk can easily translate into real-world attacks. "A financial services firm integrated a GenAI chatbot to assist with customer inquiries," Plaggemier tells Dark Reading. "Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools."


Can AWS hit its sustainability targets and win the AI race?

“Once we achieve that goal, we're looking at what's next beyond that,” Walker says. “As you look beyond just wind and solar, we need to look at what else is in our tool belt, especially looking further ahead to 2040 and how we're going to reach those ultimate goals, and carbon-free energy sources are the next evolution of that.” When asked whether carbon-free energy to the company means nuclear, geothermal, or something else, Walker says the company is open. “We're not limiting the options; we're looking beyond the traditional renewable sources and seeing what else there is. Carbon-free energy sources are going to be one of the tools that we're going to double down on and start looking at.” ... When asked if AWS will look to acquire more data centers close to nuclear plants or merely sign more PPAs that involve nuclear power, Walker says the company is looking at “all of the above.” “We haven't limited our options in terms of capacity. Depending on where we're building and at the rate we need to scale, [it's] certainly going to be part of the conversation.” Longer term, fusion energy could perhaps power the company’s data centers. Microsoft and OpenAI have invested in Helion, which is promising to crack the elusive technology before 2030. Google has invested in Tae Technologies.


6 ways to apply automation in devsecops

Securing continuous development processes is an extension of collaboration security. In most organizations today, multiple individuals on multiple teams write code every day — fixing bugs, adding new features, improving performance, etc. Consider an enterprise with three different teams contributing to the application code. Each is responsible for its own area. Once Team 1 checks in updated code, the build manager needs to ensure that this new code is compatible with code already contributed by Teams 2 and 3. The build manager creates a new build and scans it to ensure there are no vulnerabilities. With so much code being contributed, automation is critical. Only by automating the build creation, compatibility, and approval cycle can a business ensure that each step is always taken and done in a consistent manner. ... For larger enterprises, which may have thousands of developers checking in code daily, automation is a matter of survival. Even smaller companies must begin putting automated processes in place if they want to keep their developers productive while ensuring the security of their code.
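The automated build-and-scan gate described above can be sketched as a simple pipeline: merge each team's contributions into one build, scan it, and approve only if nothing is flagged. The scanner here is a stand-in for a real SAST/SCA tool, and the vulnerable-version entry is purely illustrative.

```python
# Sketch of an automated build-creation, scan, and approval gate.
# scan() is a toy stand-in for a real vulnerability scanner.

def scan(build: dict) -> list:
    """Pretend scanner: flags any dependency pinned to a known-bad version."""
    KNOWN_BAD = {("log4j", "2.14.1")}  # illustrative entry, not a real feed
    return [dep for dep in build["dependencies"] if dep in KNOWN_BAD]

def build_pipeline(team_changes: list) -> str:
    # Merge contributions from all teams into one build, as the build manager would,
    # then gate approval on a clean scan so the step is never skipped.
    build = {"dependencies": [dep for change in team_changes for dep in change]}
    findings = scan(build)
    if findings:
        return f"REJECTED: vulnerable dependencies {findings}"
    return "APPROVED"

# Teams 1-3 each contribute dependencies; Team 2's is vulnerable.
print(build_pipeline([
    [("requests", "2.32.0")],
    [("log4j", "2.14.1")],
    [("numpy", "2.1.0")],
]))
```

Encoding the gate in the pipeline itself, rather than relying on a human to remember it, is what guarantees the "always taken, always consistent" property the passage calls for.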


AI, AI Everywhere! But are we Truly Prepared?

AI models are reflections of the massive databases they feed on, and the entire internet is on their plate. Every time a user runs a query or prompts the model for a certain search requirement, the AI runs it through the maximum accessible data in its capacity, figures out relevant touchpoints, and frames the responses as demanded by the user using its intelligent capabilities. However, it is not surprising that the ever-learning and self-evolving capabilities of AI models consume more power than the search-and-response process itself. The volume of users, driven by the soaring popularity of the technology, further adds to the power consumption. ... The exercise lights up a directional path for the artificially intelligent technology to learn and evolve accordingly. This entire process of training an AI model can range anywhere from a few minutes to several months. And throughout the process, the GPUs powering the machines keep running daylong, consuming large volumes of power. On the bright side, experts have pointed out that specialised AI models are significantly more efficient in power consumption than generic models.


Ransomware attackers hop from on-premises systems to cloud

“Storm-0501 is the latest threat actor observed to exploit weak credentials and over-privileged accounts to move from organizations’ on-premises environment to cloud environments. They stole credentials and used them to gain control of the network, eventually creating persistent backdoor access to the cloud environment and deploying ransomware to the on-premises,” Microsoft shared last week. ... “Microsoft Entra Connect Sync is a component of Microsoft Entra Connect that synchronizes identity data between on-premises environments and Microsoft Entra ID,” Microsoft explained. “We can assess with high confidence that in the recent Storm-0501 campaign, the threat actor specifically located Microsoft Entra Connect Sync servers and managed to extract the plain text credentials of the Microsoft Entra Connect cloud and on-premises sync accounts. The compromise of the Microsoft Entra Connect Sync account presents a high risk to the target, as it can allow the threat actor to set or change Microsoft Entra ID passwords of any hybrid account.” The second approach – hijacking a Domain Admin user account that has a respective user account in Microsoft Entra ID – is also possible.


How AI is transforming business today

“We’re seeing lots of efficiencies where back, middle, and front-end workflows are being automated. So, yes, you can automate your existing processes, and that’s good and you can get a 20% [improvement in efficiency]. But the real gain is to reimagine the process itself,” she says. In fact, the gains AI can bring when used to reimagine processes is so significant that she says AI challenges the very concept of “process” itself. That’s because organizations can use AI to devise ways to reach specific desired outcomes without having a bias toward keeping and improving existing workflows. “Say you want to increase customer satisfaction by 35%. That’s the input. It’s less about how the process works. The process itself becomes almost irrelevant,” she explains. “The technology is good at achieving an object, a goal, and the concept of process itself, the sequence itself, is blown away. That is conceptually a big shift when you think of the enterprise, which is built on three things: people, process, and technology, and here’s a technology — AI — that doesn’t care about a process but is instead focused on outcome. That is truly disruptive.”


California Gov. Newsom Vetoes Hotly Debated AI Safety Bill

Newsom said he had asked generative AI experts, including Dr. Li, Tino Cuéllar of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes from the College of Computing, Data Science, and Society at UC Berkeley, to help California develop "workable guardrails" that focused on "developing an empirical, science-based trajectory analysis." He also asked state agencies to expand their assessment of AI risks from potential catastrophic events related to AI use. "We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment. We will thoughtfully - and swiftly - work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good," he said. Among the AI bills Newsom has signed are SB 896, which requires California's Office of Emergency Services to expand its work assessing AI's potential threat to critical infrastructure. The governor also directed the agency to undertake the same risk assessment with water infrastructure providers and the communications sector.



Quote for the day:

"Have the dogged determination to follow through to achieve your goal, regardless of circumstances or whatever other people say, think, or do." -- Paul Meyer

Daily Tech Digest - September 29, 2024

Updating Enterprise Technology to Scale to ‘AI Everywhere’

Operational systems with significant unstructured data will face substantial re-architecting due to generative AI’s ability to make use of previously underutilized data sources. In our experience, the most common solution patterns for generative AI use cases in operational systems fall within the areas of content generation, knowledge management, and reporting and documentation ... As generative AI model use cases get deployed across critical systems and complexity increases, it will put further demands on collaboration, quality control, reliability, and scalability. AI models will need to be treated with the same discipline as software code by adopting MLOps processes that use DevOps to manage models through their life cycle. Companies should set up a federated AI development model in line with the AIaaS platform. This should define the roles of teams that produce and consume AI services, as well as the processes for federated contribution and how datasets and models are to be shared. Given the pace of evolution of generative AI, it is also imperative to create AI-first software development processes that allow for rapid iteration of new solutions and architectures. 


EPSS vs. CVSS: What's the Best Approach to Vulnerability Prioritization?

EPSS is a model that provides a daily estimate of the probability that a vulnerability will be exploited in the wild within the next 30 days. The model produces a score between 0 and 1 (0% to 100%), with higher scores indicating a higher probability of exploitation. The model works by collecting a wide range of vulnerability information from various sources, such as the National Vulnerability Database (NVD), CISA KEV, and Exploit-DB, along with evidence of exploitation activity. ... By considering EPSS when prioritizing vulnerabilities, organizations can better align their remediation efforts with the actual threat landscape. For example, if EPSS indicates a high probability of exploitation for a vulnerability with a relatively low CVSS score, security teams might consider prioritizing that vulnerability over others that may have higher CVSS scores but a lower likelihood of exploitation. ... Intruder is a cloud-based security platform that helps businesses manage their attack surface and identify vulnerabilities before they can be exploited. By offering continuous security monitoring, attack surface management, and intelligent threat prioritization, Intruder allows teams to focus on the most critical risks while simplifying cybersecurity.
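The prioritization logic described above can be sketched in a few lines. This is a toy illustration, not a recommended policy: the sample CVE entries and the 0.1 EPSS threshold are invented for the example, and real programs would tune the threshold to their own risk tolerance.

```python
# Hedged sketch: rank vulnerabilities by exploitation likelihood first
# (EPSS), then fall back to severity (CVSS) for the rest.

def prioritize(vulns, epss_threshold=0.1):
    """Likely-exploited vulnerabilities come first, sorted by EPSS;
    the remainder are sorted by CVSS severity."""
    likely = sorted((v for v in vulns if v["epss"] >= epss_threshold),
                    key=lambda v: v["epss"], reverse=True)
    rest = sorted((v for v in vulns if v["epss"] < epss_threshold),
                  key=lambda v: v["cvss"], reverse=True)
    return likely + rest

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},  # critical, rarely exploited
    {"cve": "CVE-B", "cvss": 6.5, "epss": 0.89},  # medium, actively exploited
    {"cve": "CVE-C", "cvss": 7.2, "epss": 0.01},
]
print([v["cve"] for v in prioritize(vulns)])  # CVE-B jumps the queue
```

Note how CVE-B, with a medium CVSS score but a high EPSS probability, is remediated before the "critical" CVE-A, exactly the trade-off the article describes.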


How To Embrace The Enterprise AI Era

As enterprises rush to adopt AI technologies, there's a growing concern about the responsible use of these powerful tools. Ramaswamy stresses the importance of a thoughtful approach to AI implementation: "We mandated very early that any models that we train needed obviously to only take data that we had free use rights on, but we said they also need to have model cards so that if there is a problem with the data source, you can go back, retrain a model without the data source." ... Developing a robust data strategy is essential for AI success. Organizations need a clear plan for managing, sharing, and leveraging data across the enterprise. This includes establishing data governance policies, ensuring data quality and consistency, and creating a unified data architecture that supports AI initiatives. A well-designed data strategy enables companies to break down silos, improve data accessibility, and create a solid foundation for AI-driven insights and decision-making. Embracing interoperability is another critical aspect of preparing for the enterprise AI era. Companies should look for solutions that support open data formats and easy integration with other tools and platforms. 
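The model-card practice Ramaswamy describes can be pictured with a tiny data structure: each training source is recorded with its license so a problematic source can be identified and excluded before retraining. All field names, source names, and licenses below are invented for illustration; real model cards carry far more detail.

```python
# Hedged sketch of a model card tracking data provenance, so a model
# can be retrained without a data source that turns out to be a problem.

model_card = {
    "model": "demo-classifier-v2",
    "data_sources": [
        {"name": "public_reviews", "license": "CC-BY-4.0", "rows": 50000},
        {"name": "internal_tickets", "license": "proprietary", "rows": 12000},
    ],
}

def sources_excluding(card, bad_source):
    """List the sources to keep when one must be dropped before retraining."""
    return [s["name"] for s in card["data_sources"] if s["name"] != bad_source]

print(sources_excluding(model_card, "public_reviews"))  # ['internal_tickets']
```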


The Hidden Language of Data: How Linguistic Analysis Is Transforming Data Interpretation

Unlike conventional methods that focus on structured data, linguistic analysis delves into the complexities of human communication. It examines patterns, context, and meaning in text data, allowing us to extract trends and insights from sources like social media posts, customer reviews, and open-ended survey responses. Linguistic analysis in data science marries principles from the two fields. From linguistics, we borrow concepts like syntax (sentence structure), semantics (meaning), and pragmatics (context). These help us understand not just what words say, but how they’re used and what they imply. On the data science side, we leverage technologies like machine learning and natural language processing (NLP). These technologies allow us to automate the analysis of large volumes of text, identify patterns, and extract meaningful information at scale. ... Sentiment analysis is the process of determining the emotional tone behind words. It analyzes language to understand attitudes, opinions, and emotions expressed within text and identify whether a piece of text is positive, negative, or neutral.
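Sentiment analysis as described above can be illustrated with the simplest possible approach: a lexicon-based scorer. The word lists here are a tiny illustrative assumption, not a production lexicon (real systems use resources like VADER or trained NLP models that handle negation, sarcasm, and context).

```python
# Minimal lexicon-based sentiment sketch: count positive vs. negative
# words and classify the overall tone of a piece of text.

POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text):
    """Classify text as positive, negative, or neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is excellent!"))  # positive
print(sentiment("Terrible support, bad experience."))      # negative
```

The gap between this sketch and a real system is exactly where the linguistics comes in: syntax, semantics, and pragmatics are what let a model see that "not bad at all" is positive.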


Is Synthetic Data the Future of AI Model Training?

It is likely that the use of synthetic data will increase in the AI space. Gartner anticipates that it will outweigh the use of real data in AI models by 2030. “The use of it is going to grow over time, and if done correctly, [it will] allow us to create more evolved, more powerful, and more numerous models to inform the software that we're building,” Brown predicts. That potential future seems bright, but the road there is likely to come with a learning curve. “Mistakes are going to be made almost undoubtedly in the use of synthetic data initially. You're going to forget a key metric that would judge quality of data,” says Brown. “You're going to implement a biased model of some sort or a model that hallucinates maybe more than a previous model did.” Mistakes may be inevitable, but there will be new ways to combat them. As the use of synthetic data scales, the development of tools for robust quality checks will need to as well. “Just the same way that we've kept food quality high, we [need to] do the same thing to keep the model quality high,” Hazard argues.
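One concrete form the quality checks Hazard calls for can take is a statistical gate comparing a synthetic column against the real column it mimics. This is a minimal sketch: the sample values and the 10% relative tolerance are illustrative assumptions, and real pipelines would use richer distributional tests.

```python
# Hedged sketch of a synthetic-data quality gate: flag a synthetic
# sample whose mean or spread drifts too far from the real data.

from statistics import mean, stdev

def drift_check(real, synthetic, tolerance=0.10):
    """Return the summary statistics that drift more than `tolerance`
    (relative) from the real data; an empty dict means the gate passes."""
    checks = {
        "mean": (mean(real), mean(synthetic)),
        "stdev": (stdev(real), stdev(synthetic)),
    }
    return {name: (r, s) for name, (r, s) in checks.items()
            if abs(s - r) > tolerance * abs(r)}

real = [102, 98, 105, 99, 101, 97, 103, 100]
synthetic = [101, 99, 104, 98, 102, 96, 103, 100]
print(drift_check(real, synthetic))  # {} -> passes the gate
```

Scaling this idea up, per-feature and per-segment, is the "keep food quality high" discipline applied to model inputs.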


Are You Sabotaging Your Cybersecurity Posture?

When ITDR entered the picture in 2020, it was in response to a cybersecurity industry struggling to protect suddenly remote COVID-era workforces with existing identity and access management (IAM) solutions. ... Organizations should never attempt to solve cybersecurity issues they’re not prepared to handle. Investing in the right specialists — whether in-house or externally — and ongoing training is essential to maintaining strong defenses. Your organization will fall behind quickly if your team isn’t continuously evolving. Where business leaders are concerned, cybersecurity is often an attractive place to trim expenses. But businesses simply cannot cut their cybersecurity budget and hope they don’t suffer a breach. Hackers aren’t stopping, so you can’t either. ... Operating on an “it won’t happen to us” mindset will always get your organization in trouble. When it comes to strengthening your organization’s cybersecurity posture, a shift from a reactive to a proactive mindset is crucial to staying ahead of evolving threats and preventing costly and damaging breaches. A comprehensive, identity-focused cybersecurity strategy is the best way to proactively defend against threats.


Millions of Kia Vehicles Open to Remote Hacks via License Plate

The researchers found that it was relatively easy to register a Kia dealer account and authenticate to it. They could then use the generated access token to call APIs reserved for use by dealers, for things like vehicle and account lookup, owner enrollment, and several other functions. After some poking around, the researchers found that they could use their access to the dealer APIs to enter a vehicle's license-plate information and retrieve data that essentially allowed them to control key vehicle functions. These included turning the ignition on and off, remotely locking and unlocking the vehicle, activating its headlights and horn, and determining its exact geolocation. In addition, they were able to retrieve the owner's personally identifiable information (PII) and quietly register themselves as the primary account holder. That meant they had control of functions normally available only to the owner. The issues affected a range of Kia model years, from 2024 and 2025 all the way back to 2013. With the older vehicles, the researchers developed a proof-of-concept tool that showed how anyone could enter a Kia vehicle's license plate info and, in a matter of 30 seconds, execute remote commands on the vehicle.
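The root cause described above is a classic broken-access-control bug: the backend trusted a dealer-scoped token to perform owner-only actions. The missing server-side check can be sketched as follows; the action and scope names are invented for illustration and do not reflect Kia's actual API.

```python
# Hedged sketch of the missing authorization control: owner-only vehicle
# commands must be rejected for tokens that carry only dealer scope,
# regardless of whether the endpoint is reachable.

OWNER_ONLY = {"remote_unlock", "ignition_on", "geolocate"}

def authorize(token_scope, action):
    """Allow an action only if the token's scope permits it."""
    if action in OWNER_ONLY and token_scope != "owner":
        return False
    return True

print(authorize("dealer", "remote_unlock"))  # False: dealer token rejected
print(authorize("owner", "remote_unlock"))   # True
```

The lesson generalizes: authorization must be enforced per action on the server, not inferred from which API surface a caller happened to reach.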


How AI is reshaping accounting

For a while, the finance industry considered how integrated reporting could provide better information to guide investment decisions beyond just financial performance and ESG. The term has since fallen out of vogue. ... The corollary to continual close is that businesses will be able to make decisions using real-time data. Forrester predicts that by 2030, over 70% of SMBs will integrate real-time data into financial decisions, empowering them to drive growth and innovation. Harris acknowledges that today, not all business activity is captured in real time. Existing tools and infrastructure are insufficient to capture everything with the assurance that it is reliable. So, accounting data can get out of step by a few days to weeks. The vision is that with the right technology, particularly AI, they can take that delay down to zero to keep accounting data in lockstep with the business.
The last prediction is that AI will automate many routine tasks and free accountants to focus on strategic thinking and providing business insights. This will create opportunities for accountants to expand into new roles that improve business strategy and facilitate innovation.


Harnessing AI and knowledge graphs for enterprise decision-making

Whether a company’s goal is to increase customer satisfaction, boost revenue, or reduce costs, there is no single driver that will enable those outcomes. Instead, it’s the cumulative effect of good decision-making that will yield positive business outcomes. It all starts with leveraging an approachable, scalable platform that allows the company to capture its collective knowledge so that both humans and AI systems alike can reason over it and make better decisions. Knowledge graphs are increasingly becoming a foundational tool for organizations to uncover the context within their data. What does this look like in action? Imagine a retailer that wants to know how many T-shirts it should order heading into summer. A multitude of highly complex factors must be considered to make the best decision: cost, timing, past demand, forecasted demand, supply chain contingencies, how marketing and advertising could impact demand, physical space limitations for brick-and-mortar stores, and more. We can reason over all of these facets and the relationships between them using the shared context a knowledge graph provides.
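The retailer example can be made tangible with a toy in-memory graph of facts traversed to support the ordering decision. Every entity, relation, and number below is invented for illustration; production knowledge graphs live in dedicated graph databases with far richer schemas and inference.

```python
# Toy knowledge graph as (subject, relation) -> value facts about a
# hypothetical T-shirt product, combined to support one decision.

GRAPH = {
    ("tshirt", "past_summer_demand"): 12000,
    ("tshirt", "forecast_uplift"): 1.15,        # expected marketing effect
    ("tshirt", "supplier"): "supplier_a",
    ("supplier_a", "lead_time_weeks"): 6,
    ("tshirt", "store_capacity_units"): 15000,
}

def order_quantity(product):
    """Traverse related facts and combine them into an order decision,
    capping forecasted demand at available shelf capacity."""
    demand = GRAPH[(product, "past_summer_demand")]
    uplift = GRAPH[(product, "forecast_uplift")]
    capacity = GRAPH[(product, "store_capacity_units")]
    return min(round(demand * uplift), capacity)

print(order_quantity("tshirt"))  # 13800: uplifted demand, under capacity
```

The point is not the arithmetic but the shape: each fact lives once in the graph, and decisions draw on whichever slice of the shared context they need.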


Data Blind Spots and Data Opportunities: What Banks and Credit Unions May Be Missing

Financial services leaders understand that getting the deal done is only half the battle. Effective execution of a merger or acquisition is famously difficult: across all industries, between 70% and 90% of mergers and acquisitions fail to achieve their intended goals or create shareholder value, according to research by McKinsey, Harvard Business Review and others. These failures can be due to a range of factors, including poor strategic fit, cultural clashes, integration challenges, or failure to realize projected synergies. For financial institutions in particular (FDIC data since 2019 indicates that some 4-5% of insured depositories merge annually), M&A can be a way of life, and effective integration demands a data-first approach. When management data, such as financial reports, risk assessments, and accountholder information, is consolidated quickly, both institutions can harmonize their strategies, avoid duplicative efforts, and identify risks and synergies earlier. This data integration allows leadership teams to monitor KPIs, streamline operations, and make informed decisions that align with the newly combined FI’s objectives.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith