Daily Tech Digest - October 01, 2024

9 types of phishing attacks and how to identify them

Different victims, different paydays. A phishing attack specifically targeting an enterprise’s top executives is called whaling, as the victim is considered to be high-value, and the stolen information will be more valuable than what a regular employee may offer. The account credentials belonging to a CEO will open more doors than an entry-level employee. The goal is to steal data, employee information, and cash. ... Clone phishing requires the attacker to create a nearly identical replica of a legitimate message to trick the victim into thinking it is real. The email is sent from an address resembling the legitimate sender, and the body of the message looks the same as a previous message. The only difference is that the attachment or the link in the message has been swapped out with a malicious one. ... Snowshoeing, or “hit-and-run” spam, requires attackers to push out messages via multiple domains and IP addresses. Each IP address sends out a low volume of messages, so reputation- or volume-based spam filtering technologies can’t recognize and block malicious messages right away. Some of the messages make it to the email inboxes before the filters learn to block them.
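One practical way to spot the clone-phishing pattern described above is to check whether a sender's domain is a near-miss of a trusted domain. The sketch below is a minimal, illustrative check (the domains and threshold are invented; production mail filters use far richer signals):

```python
from difflib import SequenceMatcher

# Hypothetical trusted domains -- a real deployment would load these
# from the organization's allow-list.
TRUSTED_DOMAINS = {"example.com", "example-corp.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between a sender domain and a trusted domain (0..1)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def is_suspicious(sender: str, threshold: float = 0.8) -> bool:
    """True if the sender's domain closely resembles, but does not exactly
    match, a trusted domain -- a common clone-phishing tell."""
    domain = sender.rsplit("@", 1)[-1]
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike
    return any(lookalike_score(domain, t) >= threshold for t in TRUSTED_DOMAINS)

print(is_suspicious("billing@example.com"))   # False: exact match
print(is_suspicious("billing@examp1e.com"))   # True: lookalike with digit swap
```

The same idea generalizes to homoglyph checks (e.g., Cyrillic characters) and to comparing display names against known contacts.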


The End Of The SaaS Era: Rethinking Software’s Role In Business

While the traditional SaaS model may be losing its luster, software itself remains a critical component of modern business operations. The key shift is in how companies think about and utilize software. Rather than viewing it as a standalone business model, forward-thinking entrepreneurs and executives are beginning to see software as a powerful tool for creating value in other business contexts. ... Consider a hypothetical scenario where a tech company develops an AI-powered inventory management system that dramatically improves efficiency for retail businesses. Instead of simply selling this system as a SaaS product, the company could use it as leverage to acquire successful retail operations. By implementing their proprietary software, they could significantly boost the profitability of these businesses, creating value far beyond what they might have captured through traditional software licensing. ... Proponents of this new approach argue that while others will eventually catch up in terms of software capabilities, the first-movers will have already used their technological edge to acquire valuable real-world assets. 


How Agentless Security Can Prevent Major Ops Outages

An agentless security model is a modern way to secure cloud environments without installing agents on each workload. It uses cloud providers’ native tools and APIs to monitor and protect assets like virtual machines, containers and serverless functions. Here’s how it works: Data is collected through API calls, providing real-time insights into vulnerabilities. A secure proxy ensures seamless communication without affecting performance. This model continuously scans workloads, offering 100% visibility and detecting issues without disruption. ... Instead of picking between agent-based and agentless security, you can use both together. Agent-based security works best for stable, less-changing systems. It offers deep, ongoing monitoring when things stay the same. On the other hand, agentless security is great for fast-paced cloud setups where new workloads come and go often. It gives real-time insights without needing to install anything, making it flexible for larger cloud systems. A hybrid approach gives you stronger protection and keeps up with changing threats, making sure your defenses are ready for whatever comes next.
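The agentless pattern described above can be sketched in a few lines: workload metadata is pulled through a provider inventory API (stubbed here as a plain function returning dicts) and checked against known-vulnerable package versions, with nothing installed on the workloads themselves. All names and data below are illustrative:

```python
# Illustrative known-bad package versions; a real scanner would consult
# a vulnerability feed such as the NVD.
KNOWN_VULNERABLE = {("openssl", "1.0.2"), ("log4j", "2.14.1")}

def fetch_workloads():
    """Stand-in for a cloud provider inventory API call."""
    return [
        {"id": "vm-1", "packages": [("openssl", "1.0.2"), ("nginx", "1.25")]},
        {"id": "fn-7", "packages": [("log4j", "2.17.0")]},
    ]

def scan():
    """Flag workloads containing vulnerable packages, keyed by workload id."""
    findings = {}
    for workload in fetch_workloads():
        hits = [p for p in workload["packages"] if p in KNOWN_VULNERABLE]
        if hits:
            findings[workload["id"]] = hits
    return findings

print(scan())  # {'vm-1': [('openssl', '1.0.2')]}
```

Because the inventory call runs out-of-band, new workloads are picked up on the next poll without any deployment step, which is the flexibility the hybrid approach leans on.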


The inner workings of a Conversational AI

The initial stage of interaction between a user and an AI system involves input processing. When a user submits a prompt, the system undergoes a series of preprocessing steps to transform raw text into a structured format suitable for machine comprehension. Natural Language Processing (NLP) techniques are employed to break down the text into individual words or tokens, a process known as tokenization. ... Once the system has a firm grasp of the user’s intent through input processing, it embarks on the crucial phase of knowledge retrieval. This involves sifting through vast repositories of information to extract relevant data. Traditional information retrieval techniques like BM25 or TF-IDF are employed to match the processed query with indexed documents. An inverted index, a data structure mapping words to the documents containing them, accelerates this search process. ... With relevant information gathered, the system transitions to the final phase: response generation. This involves constructing a coherent and informative text that directly addresses the user’s query. Natural Language Generation (NLG) techniques are employed to transform structured data into human-readable language.
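The tokenization, inverted index, and TF-IDF retrieval steps described above can be illustrated with a toy example. This is a deliberately simplified sketch (whitespace tokenization, a basic TF-IDF score rather than BM25, invented documents):

```python
import math
from collections import Counter, defaultdict

docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog chased the cat",
    "d3": "dogs and cats make good pets",
}

def tokenize(text):
    """Naive whitespace tokenization; real NLP pipelines do much more."""
    return text.lower().split()

# Inverted index: word -> set of documents containing it.
index = defaultdict(set)
tf = {}  # per-document term frequencies
for doc_id, text in docs.items():
    tokens = tokenize(text)
    tf[doc_id] = Counter(tokens)
    for tok in tokens:
        index[tok].add(doc_id)

N = len(docs)

def search(query):
    """Rank documents by a basic TF-IDF score for the query terms."""
    scores = Counter()
    for term in tokenize(query):
        postings = index.get(term, set())
        if not postings:
            continue
        idf = math.log(N / len(postings))  # rarer terms weigh more
        for doc_id in postings:
            scores[doc_id] += tf[doc_id][term] * idf
    return scores.most_common()

print(search("cat"))
```

Note how the inverted index lets `search` touch only documents that actually contain a query term, which is what makes this approach fast over large corpora.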


Can We Ever Trust AI Agents?

The consequences of misplaced trust in AI agents could be dire. Imagine an AI-powered financial advisor that inadvertently crashes markets due to a misinterpreted data point, or a healthcare AI that recommends incorrect treatments based on biased training data. The potential for harm is not limited to individual sectors; as AI agents become more integrated into our daily lives, their influence grows exponentially. A misstep could ripple through society, affecting everything from personal privacy to global economics. At the heart of this trust deficit lies a fundamental issue: centralization. The development and deployment of AI models have largely been the purview of a handful of tech giants. ... The tools for building trust in AI agents already exist. Blockchains can enable verifiable computation, ensuring that AI actions are auditable and traceable. Every decision an AI agent makes could be recorded on a public ledger, allowing for unprecedented transparency. Concurrently, advanced cryptographic techniques like trusted execution environment machine learning (TeeML) can protect sensitive data and maintain model integrity, achieving both transparency and privacy.


Reducing credential complexity with identity federation

One potential challenge organizations may encounter when implementing federated identity management in cross-organization collaborations is ensuring a seamless trust relationship between multiple identity providers and service providers. If the trust isn’t well established or managed, it can lead to security vulnerabilities or authentication issues. Additionally, the complexity of managing multiple identity providers can become problematic if there is a need to merge user identities across systems. For example, ensuring that all identity providers fulfill their roles without conflicting or creating duplicate identities can be challenging. Finally, while federated identity management improves convenience, it can come at the cost of time-consuming engineering and IT work to set up and maintain these IdP-SP connections. Traditional in-house implementation may also mean these connections are 1:1 and hard-coded, which will make ongoing modifications even tougher. Organizations need to balance the benefits of federated identity management against the time and cost investment needed, whether they do it in-house or with a third-party solution.


AI: Maximizing innovation for good

Businesses need to understand that AI technology is here to stay. Strong AI strategies define the purpose and objectives of adopting AI, lay out the processes by which the business will prove value, and account for both the rapid pace of change and the technology itself. Implementation needs to ensure that solutions mesh effectively with the IT infrastructure already in place. Digitalization, digital transformation, and upgrading legacy systems, as overarching initiatives, require planning and an understanding of how they will impact wider business functions. That’s not to say it needs to be slow or cumbersome, however – one of the joys of AI is the ease with which it can put powerful new capabilities in the hands of teams. When due diligence is conducted effectively, AI integration can become the lynchpin that elevates business practices – boosting productivity and efficiency while lowering costs. The opportunities for improvement cannot be overstated, especially when looking at wider settings beyond just the industrial or financial sectors. Ultimately, overreaching when implementing AI can create a situation where integrated tools muddy the water and dilute the effectiveness of their intended use.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because it gives the impression of removing users’ access to files and accounts they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. Users often don’t see the security benefit and resist the approach, seeing it as an obstacle to doing their jobs and a source of frustration among teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. Although PAM solutions possess the capability for scanning and discovering privileged accounts, these solutions are limited by the scope of the instructions they receive, thus providing only partial visibility into system access and usage.


Microsoft researchers propose framework for building data-augmented LLM applications

“Data augmented LLM applications is not a one-size-fits-all solution,” the researchers write. “The real-world demands, particularly in expert domains, are highly complex and can vary significantly in their relationship with given data and the reasoning difficulties they require.” To address this complexity, the researchers propose a four-level categorization of user queries based on the type of external data required and the cognitive processing involved in generating accurate and relevant responses:

– Explicit facts: Queries that require retrieving explicitly stated facts from the data.
– Implicit facts: Queries that require inferring information not explicitly stated in the data, often involving basic reasoning or common sense.
– Interpretable rationales: Queries that require understanding and applying domain-specific rationales or rules that are explicitly provided in external resources.
– Hidden rationales: Queries that require uncovering and leveraging implicit domain-specific reasoning methods or strategies that are not explicitly described in the data.

Each level of query presents unique challenges and requires specific solutions to effectively address them.
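An application built on this categorization might route each classified query to a different handling strategy. The sketch below is purely illustrative: the classifier (an LLM call in practice) is omitted, and the strategy names are plausible examples, not the paper's prescriptions:

```python
from enum import Enum

class QueryLevel(Enum):
    EXPLICIT_FACTS = 1
    IMPLICIT_FACTS = 2
    INTERPRETABLE_RATIONALES = 3
    HIDDEN_RATIONALES = 4

# Illustrative strategy table -- real choices depend on the domain and data.
STRATEGY = {
    QueryLevel.EXPLICIT_FACTS: "basic RAG retrieval",
    QueryLevel.IMPLICIT_FACTS: "multi-hop retrieval + reasoning",
    QueryLevel.INTERPRETABLE_RATIONALES: "rule-augmented prompting",
    QueryLevel.HIDDEN_RATIONALES: "fine-tuning / in-context learning",
}

def route(level: QueryLevel) -> str:
    """Pick a handling strategy for a query of the given level."""
    return STRATEGY[level]

print(route(QueryLevel.IMPLICIT_FACTS))  # multi-hop retrieval + reasoning
```

The point of the table is that harder levels get progressively heavier machinery, which mirrors the paper's claim that each level "requires specific solutions."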


Unleashing the Power Of Business Application Integration

In many cases, businesses are replacing their legacy software solutions with a modular selection of applications hosted within a public cloud environment. Given the increasing maturity of this market, there is now a range of application stores and marketplaces from the likes of AWS, Microsoft and Google. These have made it much easier for IT teams to identify, purchase and integrate proven applications as part of a bespoke, enterprise-wide ERP strategy. ... Once IT teams have selected and integrated the right business applications within their environment, the next step is to focus on data strategy. The main objective here should be to ensure that data is of the highest quality and can be used to address a diverse range of key business objectives, from driving profit, efficiency and innovation to improving customer service. This process can be complex and challenging, but there are a number of steps organisations can take to fully exploit their data assets. These include optimising the performance and availability of an existing data environment and prioritising data systems migration.



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine

Daily Tech Digest - September 30, 2024

What Will Be the Next Big Thing in AI?

The next big thing in AI will likely be advanced multimodal models that can seamlessly integrate and process different types of data, including text, images, audio, and video, in more human-like ways, says Dinesh Puppala, regulatory affairs lead at Google. "We're moving beyond models that specialize in one type of data toward AI systems that can understand and generate across multiple modalities simultaneously, much like humans do," he notes. Advanced multimodal models will enable more natural and context-aware human-AI interactions. "They'll be better at understanding nuanced queries, interpreting visual and auditory cues, and providing more holistic and relevant responses," Puppala predicts. ... Metacognition in AI -- systems that can think about the way they think -- is on the mind of Isak Nti Asare, co-director of the cybersecurity and global policy program at Indiana University. "This capability, often described as AI self-awareness, is a necessary frontier to cross if we are to build trustworthy systems that can explain their decisions." Current AI systems, while advanced, often operate as "black boxes" where even their creators cannot fully explain their outputs. 


Best Practices for Sunsetting Mainframe Applications

The first crucial step in migrating from a mainframe to the cloud is the discovery phase. During this phase, organizations must conduct a thorough assessment of their current mainframe environment, including architecture, applications, data, dependencies, and workflows. This comprehensive understanding helps in identifying potential risks and planning the migration process effectively. The insights gained are crucial for setting the stage for the subsequent cost-benefit analysis (CBA), ensuring all stakeholders are on board with the proposed changes. A detailed CBA is essential to evaluate the financial feasibility and potential returns of the migration project. This analysis should account for all costs associated with the migration, including software licensing, cloud storage fees, and ongoing maintenance costs. It should also highlight the benefits, such as improved operational efficiency and reduced downtime, which are crucial for gaining stakeholder support. ... Effective risk management is crucial for a successful migration. This involves ensuring the availability of subject matter experts, comprehensive planning, and addressing potential issues with legacy systems. 
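The cost-benefit analysis the article calls for often comes down to a net-present-value calculation. The sketch below uses entirely invented figures; a real CBA would itemize licensing, cloud storage, maintenance, and risk adjustments as noted above:

```python
def npv(annual_net_benefit, years, discount_rate, upfront_cost):
    """Net present value: discounted stream of equal annual benefits,
    minus the one-time upfront migration cost."""
    present_value = sum(
        annual_net_benefit / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return present_value - upfront_cost

# Hypothetical numbers: $2M migration cost, $600k/yr net savings
# over 5 years, discounted at 8%.
result = npv(600_000, 5, 0.08, 2_000_000)
print(round(result))  # positive -> the migration clears the hurdle rate
```

A positive NPV supports the stakeholder case; sensitivity analysis (varying the discount rate and savings estimates) then tests how robust that conclusion is.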


Security spending signals major role change for CISOs and their teams

“Expected to do more with less,” CISOs are shifting their focus, Kalinov adds. “Instead of beefing up their internal teams, they’re focusing on risk management, regulatory compliance, and keeping C-suite executives aware of the evolving security landscape,” Kalinov says. James Neilson, SVP of international sales at cybersecurity vendor OPSWAT, believes the increasing allocation of security budgets toward software and services rather than staff reflects the CISO’s evolving role from managing internal teams toward becoming a more strategic, technology-driven leader. “This trend also indicates that they’re taking on a more prominent role in risk management, ensuring that outsourced services complement internal capabilities while maintaining agility in response to evolving threats,” Neilson says. As a result, security organizations are also undergoing a shift from traditionally siloed, in-house approaches toward a more integrated, outsourced, and technology-driven model, Neilson argues. ... “Organizations increasingly rely on elements of external managed services and advanced automation tools to manage cybersecurity, focusing internal resources on understanding the business and its risks, defining higher-level strategy, oversight, and risk management,” Neilson contends.


Shadow AI, Data Exposure Plague Workplace Chatbot Use

The issue is that the most prevalent chatbots capture whatever information users put into prompts, which could be things like proprietary earnings data, top-secret design plans, sensitive emails, customer data, and more — and send it back to the large language models (LLMs), where it's used to train the next generation of GenAI. ... ChatGPT’s creator, OpenAI, warns in its user guide, "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." But it's hard for the average worker to constantly be thinking about data exposure. Lisa Plaggemier, executive director of NCA, notes one case that illustrates how the risk can easily translate into real-world attacks. "A financial services firm integrated a GenAI chatbot to assist with customer inquiries," Plaggemier tells Dark Reading. "Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools."
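One common mitigation for the prompt-leakage risk described above is to redact sensitive patterns before input ever reaches an external chatbot. The sketch below covers a few obvious patterns; real data-loss-prevention tooling handles far more cases (names, account numbers, context-dependent secrets):

```python
import re

# Illustrative patterns only -- email addresses, US SSNs, and card numbers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Client john@corp.com, SSN 123-45-6789, asked about fees."))
# Client [EMAIL REDACTED], SSN [SSN REDACTED], asked about fees.
```

Running such a filter at a proxy or gateway in front of approved chatbots lets employees keep using the tools while keeping client data out of third-party training sets.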


Can AWS hit its sustainability targets and win the AI race?

“Once we achieve that goal, we're looking at what's next beyond that?” Walker says. “As you look beyond just wind and solar, we need to look at what else is in our tool belt, especially looking further ahead to 2040 and how we're going to reach those ultimate goals, and carbon-free energy sources are the next evolution of that.” When asked whether carbon-free energy to the company means nuclear, geothermal, or something else, Walker says the company is open. “We're not limiting the options; we're looking beyond the traditional renewable sources and seeing what else there is. Carbon-free energy sources are going to be one of the tools that we're going to double down on and start looking at.” ... When asked if AWS will look to acquire more data centers close to nuclear plants or merely sign more PPAs that involve nuclear power, Walker says the company is looking at “all of the above.” “We haven't limited our options in terms of capacity. Depending on where we're building and at the rate we need to scale, [it's] certainly going to be part of the conversation.” Longer term, fusion energy could perhaps power the company’s data centers. Microsoft and OpenAI have invested in Helion, which is promising to crack the elusive technology before 2030. Google has invested in Tae Technologies.


6 ways to apply automation in devsecops

Securing continuous development processes is an extension of collaboration security. In most organizations today, multiple individuals on multiple teams write code every day — fixing bugs, adding new features, improving performance, etc. Consider an enterprise with three different teams contributing to the application code. Each is responsible for its own area. Once Team 1 checks in updated code, the build manager needs to ensure that this new code is compatible with code already contributed by Teams 2 and 3. The build manager creates a new build and scans it to ensure there are no vulnerabilities. With so much code being contributed, automation is critical. Only by automating the build creation, compatibility, and approval cycle can a business ensure that each step is always taken and done in a consistent manner. ... For larger enterprises, which may have thousands of developers checking in code daily, automation is a matter of survival. Even smaller companies must begin putting automated processes in place if they want to keep their developers productive while ensuring the security of their code.
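The automated build-and-scan gate described above can be reduced to a simple pattern: scan each new build, and fail the pipeline on any finding. The scanner is stubbed here with invented artifact names; in practice this step would shell out to a real SCA/SAST tool on every check-in:

```python
def scan_build(artifact):
    """Stand-in for a vulnerability scanner; returns a list of findings.
    The artifact names and CVE below are illustrative."""
    known_bad = {"app-v2.ear": ["CVE-2021-44228 (log4j)"]}
    return known_bad.get(artifact, [])

def gate(artifact):
    """Approve or block a build; a non-zero return fails the CI job."""
    findings = scan_build(artifact)
    if findings:
        print(f"BLOCKED {artifact}: {', '.join(findings)}")
        return 1
    print(f"APPROVED {artifact}")
    return 0

gate("app-v1.ear")  # prints APPROVED app-v1.ear
gate("app-v2.ear")  # prints BLOCKED app-v2.ear: CVE-2021-44228 (log4j)
```

Because the gate runs identically on every contribution from every team, the compatibility-and-approval cycle is applied consistently, which is exactly what manual review cannot guarantee at scale.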


AI, AI Everywhere! But are we Truly Prepared?

AI models are reflections of the massive databases they feed on, and the entire internet is on their plate. Every time a user runs a query or prompts the model with a search requirement, the AI runs it through the maximum accessible data in its capacity, identifies relevant touchpoints, and frames responses as demanded by the user using its intelligent capabilities. Not surprisingly, however, the ever-learning and self-evolving capabilities of AI models draw more power than the search-and-response process itself. The volume of users, driven by the soaring popularity of the technology, further adds to the power consumption. ... The exercise lights a directional path for the artificially intelligent technology to learn and evolve accordingly. This entire process of training an AI model can range anywhere from a few minutes to several months. And throughout the process, the GPUs powering the machines keep running all day, eating into large volumes of power. On the bright side, experts have pointed out that specialised AI models are significantly more efficient in power consumption than generic models.


Ransomware attackers hop from on-premises systems to cloud

“Storm-0501 is the latest threat actor observed to exploit weak credentials and over-privileged accounts to move from organizations’ on-premises environment to cloud environments. They stole credentials and used them to gain control of the network, eventually creating persistent backdoor access to the cloud environment and deploying ransomware to the on-premises,” Microsoft shared last week. ... “Microsoft Entra Connect Sync is a component of Microsoft Entra Connect that synchronizes identity data between on-premises environments and Microsoft Entra ID,” Microsoft explained. “We can assess with high confidence that in the recent Storm-0501 campaign, the threat actor specifically located Microsoft Entra Connect Sync servers and managed to extract the plain text credentials of the Microsoft Entra Connect cloud and on-premises sync accounts. The compromise of the Microsoft Entra Connect Sync account presents a high risk to the target, as it can allow the threat actor to set or change Microsoft Entra ID passwords of any hybrid account.” The second approach – hijacking a Domain Admin user account that has a respective user account in Microsoft Entra ID – is also possible.


How AI is transforming business today

“We’re seeing lots of efficiencies where back, middle, and front-end workflows are being automated. So, yes, you can automate your existing processes, and that’s good and you can get a 20% [improvement in efficiency]. But the real gain is to reimagine the process itself,” she says. In fact, the gains AI can bring when used to reimagine processes is so significant that she says AI challenges the very concept of “process” itself. That’s because organizations can use AI to devise ways to reach specific desired outcomes without having a bias toward keeping and improving existing workflows. “Say you want to increase customer satisfaction by 35%. That’s the input. It’s less about how the process works. The process itself becomes almost irrelevant,” she explains. “The technology is good at achieving an object, a goal, and the concept of process itself, the sequence itself, is blown away. That is conceptually a big shift when you think of the enterprise, which is built on three things: people, process, and technology, and here’s a technology — AI — that doesn’t care about a process but is instead focused on outcome. That is truly disruptive.”


California Gov. Newsom Vetoes Hotly Debated AI Safety Bill

Newsom said he had asked generative AI experts, including Dr. Li, Tino Cuéllar of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes from the College of Computing, Data Science, and Society at UC Berkeley, to help California develop "workable guardrails" that focused on "developing an empirical, science-based trajectory analysis." He also asked state agencies to expand their assessment of AI risks from potential catastrophic events related to AI use. "We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment. We will thoughtfully - and swiftly - work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good," he said. Among the AI bills Newsom has signed are SB 896, which requires California's Office of Emergency Services to expand its work assessing AI's potential threat to critical infrastructure. The governor also directed the agency to undertake the same risk assessment with water infrastructure providers and the communications sector.



Quote for the day:

"Have the dogged determination to follow through to achieve your goal, regardless of circumstances or whatever other people say, think, or do." -- Paul Meyer

Daily Tech Digest - September 29, 2024

Updating Enterprise Technology to Scale to ‘AI Everywhere’

Operational systems with significant unstructured data will face substantial re-architecting due to generative AI’s ability to make use of previously underutilized data sources. In our experience, the most common solution patterns for generative AI use cases in operational systems fall within the areas of content generation, knowledge management, and reporting and documentation ... As generative AI model use cases get deployed across critical systems and complexity increases, it will put further demands on collaboration, quality control, reliability, and scalability. AI models will need to be treated with the same discipline as software code by adopting MLOps processes that use DevOps to manage models through their life cycle. Companies should set up a federated AI development model in line with the AIaaS platform. This should define the roles of teams that produce and consume AI services, as well as the processes for federated contribution and how datasets and models are to be shared. Given the pace of evolution of generative AI, it is also imperative to create AI-first software development processes that allow for rapid iteration of new solutions and architectures. 


EPSS vs. CVSS: What's the Best Approach to Vulnerability Prioritization?

EPSS is a model that provides a daily estimate of the probability that a vulnerability will be exploited in the wild within the next 30 days. The model produces a score between 0 and 1 (0 and 100%), with higher scores indicating a higher probability of exploitation. The model works by collecting a wide range of vulnerability information from various sources, such as the National Vulnerability Database (NVD), CISA KEV, and Exploit-DB, along with evidence of exploitation activity. ... By considering EPSS when prioritizing vulnerabilities, organizations can better align their remediation efforts with the actual threat landscape. For example, if EPSS indicates a high probability of exploitation for a vulnerability with a relatively low CVSS score, security teams might consider prioritizing that vulnerability over others that may have higher CVSS scores but a lower likelihood of exploitability. ... Intruder is a cloud-based security platform that helps businesses manage their attack surface and identify vulnerabilities before they can be exploited. By offering continuous security monitoring, attack surface management, and intelligent threat prioritization, Intruder allows teams to focus on the most critical risks while simplifying cybersecurity.
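The combined triage approach described above can be sketched as a simple weighted ranking: exploitation probability (EPSS) weighted by severity (CVSS). The scores and CVE labels below are invented for illustration; real programs would also factor in asset criticality and compensating controls:

```python
# Illustrative vulnerability records: CVSS is severity (0-10),
# EPSS is probability of exploitation within 30 days (0-1).
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},
    {"cve": "CVE-B", "cvss": 6.5, "epss": 0.89},
    {"cve": "CVE-C", "cvss": 7.2, "epss": 0.45},
]

def priority(v):
    """Weight likelihood of exploitation by severity of impact."""
    return v["epss"] * v["cvss"]

for v in sorted(vulns, key=priority, reverse=True):
    print(f'{v["cve"]}: priority={priority(v):.2f}')
```

Note the outcome: the lower-CVSS but highly exploitable CVE-B outranks the critical-severity but rarely exploited CVE-A, which is precisely the reordering the article argues for.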


How To Embrace The Enterprise AI Era

As enterprises rush to adopt AI technologies, there's a growing concern about the responsible use of these powerful tools. Ramaswamy stresses the importance of a thoughtful approach to AI implementation: "We mandated very early that any models that we train needed obviously to only take data that we had free use rights on, but we said they also need to have model cards so that if there is a problem with the data source, you can go back, retrain a model without the data source." ... Developing a robust data strategy is essential for AI success. Organizations need a clear plan for managing, sharing, and leveraging data across the enterprise. This includes establishing data governance policies, ensuring data quality and consistency, and creating a unified data architecture that supports AI initiatives. A well-designed data strategy enables companies to break down silos, improve data accessibility, and create a solid foundation for AI-driven insights and decision-making. Embracing interoperability is another critical aspect of preparing for the enterprise AI era. Companies should look for solutions that support open data formats and easy integration with other tools and platforms. 


The Hidden Language of Data: How Linguistic Analysis Is Transforming Data Interpretation

Unlike conventional methods that focus on structured data, linguistic analysis delves into the complexities of human communication. It examines patterns, context, and meaning in text data, allowing us to extract trends and insights from sources like social media posts, customer reviews, and open-ended survey responses. Linguistic analysis in data science marries principles from the two fields. From linguistics, we borrow concepts like syntax (sentence structure), semantics (meaning), and pragmatics (context). These help us understand not just what words say, but how they’re used and what they imply. On the data science side, we leverage technologies like machine learning and natural language processing (NLP). These technologies allow us to automate the analysis of large volumes of text, identify patterns, and extract meaningful information at scale. ... Sentiment analysis is the process of determining the emotional tone behind words. It analyzes language to understand attitudes, opinions, and emotions expressed within text and identify whether a piece of text is positive, negative, or neutral.
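The sentiment-analysis idea above can be demonstrated with a toy lexicon-based scorer. This is only a sketch with a handful of invented lexicon words; real sentiment systems use trained models and handle negation, intensity, and sarcasm:

```python
# Tiny illustrative sentiment lexicons.
POSITIVE = {"great", "good", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by counting
    lexicon hits among its tokens."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is excellent"))  # positive
print(sentiment("terrible support, bad experience"))      # negative
```

Even this crude approach shows the pipeline: tokenize (syntax), look up meaning (semantics), and aggregate into an attitude — the context-sensitivity (pragmatics) is what the heavier NLP machinery adds.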


Is Synthetic Data the Future of AI Model Training?

It is likely that the use of synthetic data will increase in the AI space. Gartner anticipates that it will outweigh the use of real data in AI models by 2030. “The use of it is going to grow over time, and if done correctly, [it will] allow us to create more evolved, more powerful, and more numerous models to inform the software that we're building,” Brown predicts. That potential future seems bright, but the road there is likely to come with a learning curve. “Mistakes are going to be made almost undoubtedly in the use of synthetic data initially. You're going to forget a key metric that would judge quality of data,” says Brown. “You're going to implement a biased model of some sort or a model that hallucinates maybe more than a previous model did.” Mistakes may be inevitable, but there will be new ways to combat them. As the use of synthetic data scales, the development of tools for robust quality checks will need to as well. “Just the same way that we've kept food quality high, we [need to] do the same thing to keep the model quality high,” Hazard argues.


Are You Sabotaging Your Cybersecurity Posture?

When ITDR entered the picture in 2020, it was in response to a cybersecurity industry struggling to protect suddenly remote COVID-era workforces with existing identity and access management (IAM) solutions. ... Organizations should never attempt to solve cybersecurity issues they’re not prepared to handle. Investing in the right specialists — whether in-house or externally — and ongoing training is essential to maintaining strong defenses. Your organization will fall behind quickly if your team isn’t continuously evolving. Where business leaders are concerned, cybersecurity is often an attractive place to trim expenses. But businesses simply cannot cut their cybersecurity budget and hope they don’t suffer a breach. Hackers aren’t stopping, so you can’t either. ... Operating on an “it won’t happen to us” mindset will always get your organization in trouble. When it comes to strengthening your organization’s cybersecurity posture, a shift from a reactive to a proactive mindset is crucial to staying ahead of evolving threats and preventing costly and damaging breaches. A comprehensive, identity-focused cybersecurity is the best way to proactively defend against threats. 


Millions of Kia Vehicles Open to Remote Hacks via License Plate

The researchers found that it was relatively easy to register a Kia dealer account and authenticate to it. They could then use the generated access token to call APIs reserved for use by dealers, for things like vehicle and account lookup, owner enrollment, and several other functions. After some poking around, the researchers found that they could use their access to the dealer APIs to enter a vehicle's license-plate information and retrieve data that essentially allowed them to control key vehicle functions. These included turning the ignition on and off, remotely locking and unlocking a vehicle, activating its headlights and horn, and determining its exact geolocation. In addition, they were able to retrieve the owner's personally identifying information (PII) and quietly register themselves as the primary account holder. That meant they had control of functions normally available only to the owner. The issues affected a range of Kia model years, from 2024 and 2025 all the way back to 2013. With the older vehicles, the researchers developed a proof-of-concept tool that showed how anyone could enter a Kia's license-plate info and, in a matter of 30 seconds, execute remote commands on the vehicle.
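At its core, the flaw the researchers describe is a missing server-side authorization check: a dealer-scoped token was honored for owner-level vehicle commands. A minimal sketch of the kind of check that was evidently absent (the role names, actions, and token shape here are hypothetical illustrations, not Kia's actual API):

```python
# Hypothetical role-based check: owner-level commands must be rejected
# for dealer-scoped tokens, and owner tokens must be bound to the vehicle.
DEALER_ACTIONS = {"vehicle_lookup", "account_lookup", "owner_enrollment"}
OWNER_ONLY_ACTIONS = {"unlock", "start_ignition", "locate", "read_owner_pii"}

def authorize(token: dict, action: str, vin: str) -> bool:
    role = token.get("role")
    if action in DEALER_ACTIONS:
        return role in {"dealer", "owner"}
    if action in OWNER_ONLY_ACTIONS:
        # Owner-level commands require an owner token bound to this vehicle.
        return role == "owner" and token.get("vin") == vin
    return False  # unknown actions denied by default

dealer_token = {"role": "dealer"}
owner_token = {"role": "owner", "vin": "KNDJ123"}

print(authorize(dealer_token, "vehicle_lookup", "KNDJ123"))  # dealer lookup: allowed
print(authorize(dealer_token, "unlock", "KNDJ123"))          # attack path: blocked
print(authorize(owner_token, "unlock", "KNDJ123"))           # legitimate owner: allowed
```

The reported attack worked precisely because the second call, or its real-world equivalent, succeeded: dealer credentials unlocked owner-level control.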


How AI is reshaping accounting

For a while, the finance industry explored how to provide better information to guide investment decisions beyond just financial performance and ESG through integrated reporting. The term has fallen out of vogue. ... The corollary to continual close is that businesses will be able to make decisions using real-time data. Forrester predicts that by 2030, over 70% of SMBs will integrate real-time data into financial decisions, empowering them to drive growth and innovation. Harris acknowledges that today, not all business activity is captured in real time. Existing tools and infrastructure are insufficient to capture everything with the assurance that it is reliable. So, accounting data can get out of step by a few days to weeks. The vision is that with the right technology, particularly AI, that delay can be taken down to zero to keep accounting data in lockstep with the business.
... The last prediction, on new opportunities, is that AI will automate many routine tasks and free accountants to focus on strategic thinking and provide business insights. This will create opportunities for accountants to expand into new roles that improve business strategy and facilitate innovation. 


Harnessing AI and knowledge graphs for enterprise decision-making

Whether a company’s goal is to increase customer satisfaction, boost revenue, or reduce costs, there is no single driver that will enable those outcomes. Instead, it’s the cumulative effect of good decision-making that will yield positive business outcomes. It all starts with leveraging an approachable, scalable platform that allows the company to capture its collective knowledge so that humans and AI systems alike can reason over it and make better decisions. Knowledge graphs are increasingly becoming a foundational tool for organizations to uncover the context within their data. What does this look like in action? Imagine a retailer that wants to know how many T-shirts it should order heading into summer. A multitude of highly complex factors must be considered to make the best decision: cost, timing, past demand, forecasted demand, supply chain contingencies, how marketing and advertising could impact demand, physical space limitations for brick-and-mortar stores, and more. We can reason over all of these facets and the relationships between them using the shared context a knowledge graph provides.
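The retailer example can be made concrete with a toy knowledge graph: facts stored as (subject, relation, object) triples, traversed to gather every factor bearing on the ordering decision. All entities and relation names below are illustrative; real systems would use a graph database and a richer query language:

```python
# Knowledge graph as (subject, relation, object) triples.
triples = [
    ("tshirt_order", "depends_on", "unit_cost"),
    ("tshirt_order", "depends_on", "forecast_demand"),
    ("tshirt_order", "depends_on", "supply_chain"),
    ("tshirt_order", "constrained_by", "store_shelf_space"),
    ("forecast_demand", "informed_by", "past_demand"),
    ("forecast_demand", "informed_by", "marketing_campaign"),
    ("supply_chain", "constrained_by", "lead_time"),
]

def related(node, seen=None):
    """Collect every factor reachable from a node, following any relation."""
    seen = set() if seen is None else seen
    for subj, _rel, obj in triples:
        if subj == node and obj not in seen:
            seen.add(obj)
            related(obj, seen)  # follow transitive relationships
    return seen

factors = related("tshirt_order")
print(sorted(factors))
```

The point of the graph structure is the transitive hop: marketing_campaign never mentions tshirt_order directly, yet the traversal surfaces it as a relevant factor via forecast_demand — exactly the shared context the passage describes.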


Data Blind Spots and Data Opportunities: What Banks and Credit Unions May Be Missing

Financial services leaders understand that getting the deal done is only half the battle. Effective execution of a merger or acquisition is famously difficult: across all industries, between 70% and 90% of mergers and acquisitions fail to achieve their intended goals or create shareholder value, according to research by McKinsey, Harvard Business Review and others. These failures can be due to a range of factors, including poor strategic fit, cultural clashes, integration challenges, or failure to realize projected synergies. For financial institutions in particular — FDIC data since 2019 indicates that some 4-5% of insured depositories merge annually — M&A can be a way of life, and effective integration demands a data-first approach. When management data — such as financial reports, risk assessments, and accountholder information — is consolidated quickly, both institutions can harmonize their strategies, avoid duplicative efforts, and identify risks and synergies earlier. This data integration allows leadership teams to monitor KPIs, streamline operations, and make informed decisions that align with the newly combined FI’s objectives.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith

Daily Tech Digest - September 28, 2024

IoT devices will be the catalyst for the 4th industrial revolution

The impact of IoT on product quality is not just reactive but also proactive. IoT-enabled traceability systems ensure that every component of a product can be tracked from its origin to the final assembly, ensuring full compliance with industry standards and regulations. Plus, automated systems can monitor and adjust energy usage in real-time, leading to more efficient operations that lower the overall carbon footprint of a facility. By minimizing energy waste, companies will contribute to a more sustainable environment while also realizing substantial cost savings. These savings can be reinvested into research and development, driving innovation and enhancing product quality. In turn, compliance eliminates unnecessary product waste and energy consumption, which then lowers the final cost for consumers while heightening brand reputation. ... By combining the real-time data collection capabilities of IoT devices with AI-driven analytics, IoT technologies can be leveraged to enable the seamless integration of clean energy sources into industrial operations. Solar, wind, and other renewable energy sources can be efficiently managed through smart grids and automated systems that balance the energy load, ensuring that clean energy is utilized to its fullest potential. 


Hackers Weaponizing PDF Files To Deliver New SnipBot Malware

Attackers exploit the ubiquity and perceived trustworthiness of PDFs to trick victims into opening malicious files that can contain malicious links, embedded code, or vulnerabilities that allow remote code execution. Security researchers at Palo Alto Networks recently identified hackers actively weaponizing PDF files to deliver the new SnipBot malware. ... SnipBot employs a multi-stage infection process that begins with a signed executable disguised as a PDF. It uses anti-sandbox techniques such as checking process names and registry entries. To evade detection, the malware relies on window message-based control-flow obfuscation and encrypted strings. It also downloads additional payloads, including a DLL that injects code into Explorer.exe through COM hijacking. The core functionality of SnipBot is a backdoor (single.dll) that creates a “SnipMutex” and enables threat actors to execute commands, upload and download files, and deploy extra modules. ... Through these evasion techniques, payload-delivery methods, and post-infection capabilities, SnipBot compromises systems and exfiltrates sensitive data.


Novel Exploit Chain Enables Windows UAC Bypass

Despite the potential for privilege escalation, Microsoft refused to accept the issue as a vulnerability. After Fortra reported it, the company responded by pointing to the "non-boundaries" section of the Microsoft Security Servicing Criteria for Windows, which outlines how "some Windows components and configurations are explicitly not intended to provide a robust security boundary." ... Reguly and Fortra disagree with Microsoft's perspective. "When UAC was introduced, I think we were all sold on the idea that UAC was this great new security feature, and Microsoft has a history of fixing bypasses for security features," he says. "So if they're saying that this is a trust boundary that is acceptable to traverse, really what they're saying to me is that UAC is not a security feature. It's some sort of helpful mechanism, but it's not actually security related. I think it's a really strong philosophical difference." ... Philosophical differences aside, Reguly stresses that businesses need to be aware of the risk in allowing lower-integrity admins to escalate their privileges to attain full system control.


How factories are transforming their operations with AI

One of the key end goals for the integration of AI in manufacturing is the establishment of 'lights-out factories,' which means fully automating everything within the factory environment so that there is minimal to zero need for human input. The need for human intervention is so minimal that the production process can effectively be managed with the lights turned off. FANUC is one example of a company that operates a lights-out factory in Japan to build its robots, having done so since 2001. The company makes 50 robots for every 24-hour shift, according to the Association for Manufacturing Technology, with the factory running unsupervised for up to 30 days without human input. Automotive manufacturing is another sector in which AI has been a major positive influence. BMW's AIQX automates certain quality control processes by using sensor technology and AI. Algorithms analyze the data they record in real time and send employees feedback immediately, quickly detecting anomalies on the assembly line. Similarly, Rolls-Royce has melded data analytics with AI, pulling in masses of data from in-service engines in real time and feeding this into digital twins. 


Beyond encryption: Hidden dangers in the wake of ransomware incidents

One of the most insidious threats in the post-ransomware landscape is the potential presence of multiple threat actors within a compromised environment. This scenario, while relatively rare, can have devastating consequences for victim organizations. The root of this problem often lies in the cyber incident ecosystem itself, particularly in the use of initial access brokers (IABs) by ransomware groups. These IABs, motivated by profit, may sell access to the same compromised network to multiple malicious actors. The result can be a perfect storm of cyber activity, with different groups vying for control of the same systems. ... Another vector for multiple-actor intrusions comes from an unexpected source: the tools used by information security professionals themselves. Malvertising campaigns have become increasingly sophisticated, targeting legitimate software distribution channels to spread compromised versions of popular security tools. Ironically, the very applications designed to protect systems can become Trojan horses for malicious actors. ... The complexity of modern cyber threats underscores the necessity of comprehensive forensic analysis following any security incident.


Prioritize Robust Engineering Over Overblown GenAI Promises

Beyond tackling data quality and scalability concerns, this necessary shift towards engineering innovation will lead to developing tools and frameworks that better support AI workflows, including handling large volumes of unstructured data (including images and videos). That, in turn, will foster a more collaborative and integrated approach between AI and data management practices. As the AI and data stacks complement each other, we can expect more cohesive and innovative solutions that address AI implementation’s technical and operational challenges. ... This maturation process promises substantial benefits beyond the realm of developers and engineers. Just as the dot-com bubble burst led to the refinement and widespread adoption of internet technologies, the current focus on data curation and engineering in AI will pave the way for transformative applications across various industries. Imagine AI-powered healthcare diagnostics that rely on meticulously curated data sets or financial systems that leverage AI for predictive analytics to manage risks more effectively. These advancements aren’t just about enhancing technical capabilities; they’re about improving outcomes for society as a whole.


IT leaders weigh up AI’s role to improve data management

“The important thing in data management is having a solid disaster recovery plan,” says Macario. “In fact, security for an NGO like ours is both a cyber and physical problem because not only are we the target of attacks, but we operate in war zones, where the services provided aren’t always reliable and, in the event of failures, hardware replacement parts are difficult to find.” Innovative encryption and geographic data backup technologies are applied, in particular immutable cloud technology that protects against ransomware. These are supported by AI for endpoint protection. User identities are also managed on the Azure Entra ID platform, which has integrated AI and warns of suspicious activity in real time. ... “We turned to the big technology players to solve the problem and the LLM algorithms led to a turning point, because they allowed us to carry out the analyses,” says Macario. “These are used by our Medical Division departments to analyze access to care and improve quality, obtain statistics, create an archive, and understand what instruments, drugs, and doctors we need in a war context. The data form a scientific basis on which to base our intervention and our ability to report the effects of war on civilian populations.”


Is it possible to save money and run on a public cloud?

In the early days of cloud computing, big providers promoted the migration of applications and data to the cloud without modification or modernization. The advice was to fix it when it got there, not before. Guess what? Workloads were never fixed or modernized. These lift-and-shift applications and data consumed about three times the resources enterprises thought they would. This led to a disenchantment with public cloud providers, even though enterprises also bore some responsibility. ... High cloud costs usually stem from the wrong cloud services or tools, flawed application load estimates, and developers who designed applications without understanding where the cloud saves money. You can see this in the purposeful use of microservices as a base architecture. ... The key to winning this war is planning. You’ll need good architecture and engineering talent to find the right path. This is probably the biggest reason we haven’t gone down this road as often as we should. Enterprises can’t find the people needed to make these calls; it’s hard to find that level of skill. Cloud providers can also be a source of help. Many have begun to use the “O word” (optimization) and understand that to keep their customers happy, they need to provide some optimization guidance. 


Beyond Compliance: Leveraging Security Audits for Enhanced Risk Management

One of the most effective ways to approach risk management in an organization is through a comprehensive security audit. Security audits objectively assess layers of an organization’s security controls, established system and operational policies, and documented procedures. Rather than simply passing or failing a defined list of compliance protocols, a security audit examines all elements of an organization’s security posture. This includes looking for potential weak points in connected networks and systems and identifying areas that function adequately but could be improved. ... Security auditing processes can also be built into the organization’s disaster recovery initiatives. As the business tests its incident response protocols throughout the year, pairing this process with a formal audit helps the organization to be better prepared to respond more effectively to operational disruptions. However, the benefits of a security audit aren’t just associated with minimizing operational risks. This proactive security approach can also play an impactful role when demonstrating the organization’s commitment to their customer’s data privacy.


Security, AIOps top mainframe customer challenges

“The increased prioritization of AIOps reflects surging interest in the implementation of emerging technologies on the mainframe. Those reporting the adoption of AIOps on the mainframe increased [9%] from the 2023 BMC Mainframe Survey, while 76% of respondents reported the use of generative AI [genAI] in their organizations,” McKenney wrote. “The power of AI/ML and genAI open a new world of possibility in IT management. Organizations are leveraging these technologies throughout their IT ecosystems to gain real-time insight into security postures, automate issue resolution, gain critical business insight, and onboard and train new personnel,” McKenney wrote. ... Its BMC AMI Platform will feature the BMC AMI Assistant, a chat-based, AI-powered assistant available for developers, operators, system programmers, and IT managers to use for real-time explanations, support, and automation, the company stated. “Whether help is needed to debug code, understand system processes, or make informed decisions and take actions, the BMC AMI Assistant will provide expert guidance instantly, enhancing productivity and reducing downtime. Users will leverage BMC AMI Assistant Tools to capture their local knowledge and integrate it seamlessly into the BMC AMI Assistant,” McKenney wrote in a BMC blog.



Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh

Daily Tech Digest - September 27, 2024

What happens when everybody winds up wearing ‘AI body cams’?

The first body cams were primitive. They were enormous, had narrow, 68-degree fields of view, had only 16GB of internal storage, and had batteries that lasted only four hours. Body cams now usually have high-resolution sensors, GPS, infrared for low-light conditions, and fast charging. They can be automatically activated through Bluetooth sensors, weapon release, or sirens. They use backend management systems to store, analyze, and share video footage. The state of the art — and the future of the category — is multimodal AI. ... Using such a system in multimodal AI, a user could converse with their AI agent, asking questions about what the glasses were pointed at previously. These glasses will almost certainly have a dashcam-like feature where video is constantly recorded and deleted. Users can push a button to capture and store the past 30 seconds or 30 minutes of video and audio — basically creating an AI body cam worn on the face. Smart glasses will be superior to body cams, and over time, AI body cams for police and other professionals will no doubt be replaced by AI camera glasses. This raises the question: When everybody has AI body cams — specifically glasses with AI body cam functionality — what does society then look like?
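The "constantly recorded and deleted" behavior the author describes is essentially a ring buffer: new frames push the oldest ones out, and pressing the button simply persists whatever the buffer currently holds. A minimal sketch with strings standing in for video frames (the frame rate and buffer length are illustrative):

```python
from collections import deque

FPS = 30
BUFFER_SECONDS = 30

# Ring buffer: once full, each appended frame evicts the oldest one.
buffer = deque(maxlen=FPS * BUFFER_SECONDS)

def record_frame(frame):
    buffer.append(frame)  # deque with maxlen discards the oldest automatically

def save_clip():
    """User presses the button: persist the last 30 seconds of footage."""
    return list(buffer)

# Simulate roughly 166 seconds of continuous recording.
for i in range(5000):
    record_frame(f"frame-{i}")

clip = save_clip()
print(len(clip))              # 900 frames = 30 s at 30 fps
print(clip[0], clip[-1])      # frame-4100 frame-4999
```

Everything older than the buffer window is gone by design, which is exactly the privacy trade-off these devices embody: nothing is kept unless the wearer chooses to keep it.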


Aligning Cloud Costs With Sustainability and Business Goals

AI is poised for democratization, similar to the cloud. Users will have the choice and ability to use multiple models for numerous use cases. Future trends indicate a rise in culturally aware and industry-specific models that will further facilitate the democratization of AI. Singapore's National Research Foundation launched AI Singapore - a national program to enhance the country's AI capabilities - to make its LLMs more culturally accurate, localized and tailored to Southeast Asia. AWS is working with Singapore public organizations to develop innovative, industry-first solutions powered by AI and gen AI, including AI Singapore's SEA-LION. Building on AWS' scalable compute infrastructure, SEA-LION is a family of LLMs that is specifically pre-trained and instruct-tuned for Southeast Asian languages and cultures. AWS released the Amazon Bedrock managed service to support gen AI deployments for large enterprises. It now provides easy access to multiple large language models and foundation models from AI21 Labs, Anthropic, Cohere, Meta and Stability AI through a single API, along with a broad set of capabilities organizations need to build gen AI applications with security, privacy and responsible AI.


Fortifying the Weakest Link: How to Safeguard Against Supply Chain Cyberattacks

Failures in systems and processes by third parties can lead to catastrophic reputational and operational damage. It is no longer sufficient to merely implement basic vendor management procedures. Organizations must also take proactive measures to safeguard against third-party control failures. ... Protect administrative access to the tools and applications used by DevOps teams. Enable secure application configuration via secrets and authenticate applications and services with high confidence. Mandate that software suppliers certify and extend security controls to cover microservices, cloud, and DevOps environments. ... Ensure that your systems and those of your suppliers are regularly updated and patched for known vulnerabilities. Prevent the use of unsupported or outdated software that could introduce new vulnerabilities. ... Configure cloud environments to reject authorization requests involving tokens that deviate from accepted norms. For on-premises systems, follow the National Security Agency’s guidelines by deploying a Federal Information Processing Standards (FIPS)-validated Hardware Security Module (HSM) to store token-signing certificate private keys. HSMs significantly reduce the risk of key theft by threat actors.


Are hardware supply chain attacks “cyber attacks?”

In the case of hardware supply chain attacks, malicious actors infiltrate the supply of devices, or the physical manufacturing process of pieces of hardware and purposefully build in security flaws, faulty parts, or backdoors they know they can take advantage of in the future, such as malicious microchips on a circuit board. For Cisco’s part, the Cisco Trustworthy technologies program, including secure boot, Cisco Trust Anchor module (TAm), and runtime defenses give customers the confidence that the product is genuinely from Cisco. As I was thinking about the threat of hardware supply chain attacks, I was left wondering who, exactly, should be tasked with solving this problem. And I think I’ve decided the onus falls on several different sectors. It shouldn’t just be viewed as a cybersecurity issue, because for a hardware supply chain attack, an adversary would likely need to physically infiltrate or tamper with the manufacturing process. Entering a manufacturing facility or other stops along the logistics chain would require some level of network-level manipulation, such as faking a card reader or finding a way to trick physical defenses — that’s why Cisco Talos Incident Response looks for these types of things in Purple Team exercises.


How The Digital Twin Helps Build Resilient Manufacturing Operations

The digital twin is a sophisticated tool. It must be a true working virtual replica of the physical asset. Anything short of that means problems. To make it all work, consider several key aspects. You will most likely need multiple digital twins of the same physical asset. At least one digital twin should be online most of the time, collecting data from the real world. Other copies of the digital twin might be offline at times, but they use the real-world data in various training situations and for optimizing the equipment and the line. Getting data from the real world into the digital twin is one of the best and most common uses for the Industrial Internet of Things (IIoT). The latest digital twins are incorporating AI to help optimize the design process, learn from previous designs and create new equipment designs. AI helps create operator training scenarios and optimizes the equipment and production line. AI learns from the optimization process and, even with new wrinkles thrown into the real world, learns how to optimize the optimization process. It helps troubleshoot the equipment, finding issues quickly, long before they become serious problems.
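The online/offline split described above can be sketched in a few lines: one twin mirrors live IIoT telemetry, while offline copies replay and extend that data for training or what-if optimization without touching the live replica. All class and sensor names here are illustrative:

```python
import copy

class DigitalTwin:
    """Virtual replica of a physical asset, fed by IIoT sensor readings."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.history = []   # accumulated real-world telemetry
        self.state = {}     # latest mirrored state

    def ingest(self, reading: dict):
        """Online twin: mirror the physical asset in (near) real time."""
        self.state = reading
        self.history.append(reading)

    def offline_copy(self):
        """Clone for training or optimization runs, isolated from the live twin."""
        return copy.deepcopy(self)

live = DigitalTwin("press-07")
for temp in (180, 185, 240):        # simulated IIoT feed
    live.ingest({"temp_c": temp})

trainer = live.offline_copy()
trainer.ingest({"temp_c": 500})     # what-if scenario runs on the copy only

print(live.state["temp_c"])         # live twin still reflects the real asset
print(len(trainer.history))         # copy carries real history plus the scenario
```

The deep copy is the key design choice: experiments on the offline twin can push the virtual equipment to failure without any risk of the scenario data leaking back into the replica that mirrors the real line.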


3 tips for securing IoT devices in a connected world

Comprehensive visibility refers to an organization’s ability to identify, monitor and remotely manage each individual device connected to its network. Gaining this level of visibility is a crucial first step for maintaining a robust security posture and preventing unauthorized access or potential breaches. ... Addressing common vulnerabilities like built-in backdoors and unpatched firmware is essential for maintaining the security of connected devices. Built-in backdoors are hidden or undocumented access points in a device’s software or firmware that allow unauthorized access to the device or its network. These backdoors are often left by manufacturers for maintenance or troubleshooting purposes but can be exploited by attackers if not properly secured. ... One important step in secure deployment is limiting access to critical resources using network segmentation. Network segmentation involves dividing a network into smaller, isolated segments or subnets, each with its own security controls. This practice limits the movement of threats across the network, reducing the risk of a compromised IoT device leading to a broader security breach. 
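A segmentation policy like the one described can be expressed and checked mechanically. A small sketch using Python's standard ipaddress module; the subnet layout and allowed flows are illustrative, not a recommended production design:

```python
import ipaddress

# Each segment gets its own subnet; policy lists the allowed cross-segment flows.
SEGMENTS = {
    "corp":     ipaddress.ip_network("10.0.10.0/24"),
    "iot":      ipaddress.ip_network("10.0.20.0/24"),
    "iot_mgmt": ipaddress.ip_network("10.0.30.0/24"),
}
ALLOWED_FLOWS = {("iot_mgmt", "iot")}  # only the management segment may reach IoT devices

def segment_of(ip):
    addr = ipaddress.ip_address(ip)
    return next((name for name, net in SEGMENTS.items() if addr in net), None)

def flow_allowed(src_ip, dst_ip):
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    if src is None or dst is None:
        return False  # unknown devices are denied by default
    return src == dst or (src, dst) in ALLOWED_FLOWS

print(flow_allowed("10.0.30.5", "10.0.20.9"))  # management -> IoT: allowed
print(flow_allowed("10.0.20.9", "10.0.10.5"))  # compromised IoT -> corp: blocked
```

The second check is the payoff of segmentation: even if an IoT device with an unpatched backdoor is compromised, the policy denies it a path into the corporate segment, containing the breach.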


Why countries are in a race to build AI factories in the name of sovereign AI

“The number of sovereign AI clouds is really quite significant,” Huang said in the earnings call. He said Nvidia wants to enable every company to build its own custom AI models. The motivations weren’t just about keeping a country’s data in local tech infrastructure to protect it. Rather, they saw the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation, said Colette Kress, CFO of Nvidia, in the earnings call. That was around the time when the Biden administration was restricting sales of the most powerful AI chips to China, requiring a license from the U.S. government before shipments could happen. That licensing requirement is still in effect. As a result, China reportedly began its own attempts to create AI chips to compete with Nvidia’s. But it wasn’t just China. Kress also said Nvidia was working with the Indian government and its large tech companies like Infosys, Reliance and Tata to boost their “sovereign AI infrastructure.” Meanwhile, French private cloud provider Scaleway was investing in regional AI clouds to fuel AI advances in Europe as part of a “new economic imperative,” Kress said. 


Is Spring AI Strong Enough for AI?

While the Spring framework itself does not have a dedicated AI library, it has proven to be an effective platform for developing AI-driven systems when combined with robust AI/ML frameworks. Spring Boot and Spring Cloud provide essential capabilities for deploying AI/ML models, managing REST APIs, and orchestrating microservices, all of which are crucial components for building and deploying production-ready AI systems. ... Spring, typically known as a versatile enterprise framework, showcases its effectiveness in high-quality AI deployments when combined with its robust scalability, security, and microservice architecture features. Its seamless integration with machine learning models, especially through REST APIs and cloud infrastructure, positions it as a formidable choice for enterprises seeking to integrate AI with intricate business systems. Nevertheless, for more specialized tasks such as model versioning, training orchestration, and rapid prototyping, AI-specific frameworks like TensorFlow Serving, Kubernetes, and MLflow offer tailored solutions that excel in high-performance model serving, distributed AI workflows, and streamlined management of the complete machine learning lifecycle with minimal manual effort.


Top Skills Chief AI Officers Must Have to Succeed in Modern Workplace

Domain knowledge is obviously vital. Possessing an understanding of core AI concepts is a must. Machine learning (ML), data analytics, and software development are elementary requirements a capable CAIO will leverage for specific business goals. Given the incipient stage of AI transformation, candidates will have to supplement their knowledge with continuous learning, adaptability, and initiative. Notably, a CAIO must use their expertise to arrive at data-driven decisions—it sets a good professional apart and highlights their capacity to troubleshoot accurately. ... A CAIO must translate AI concepts into clear strategies, prioritizing among multiple potential implementations based on their judgment of what will deliver the greatest value. This involves setting concrete goals such as improved efficiency, enhanced customer engagement, or increased employee productivity, and devising a roadmap to achieve them. ... Beyond the technical knowledge and strategic acumen, a powerful grasp of how business processes work within an organization and why they function the way they do is crucial. CAIOs must foremost align with this culture and find ways to integrate AI within that framework.


5 Ways to Keep Global Development Teams Productive

A significant challenge for global development teams is ensuring smooth collaboration between different locations. Without the right tools and processes, team members can experience delays due to time zone differences, slow data access, or inconsistent version control systems. To improve collaboration, development teams should implement systems that provide fast, reliable access to codebases, regardless of location. Real-time collaboration tools that synchronize work across global teams are essential. For instance, platforms that replicate repositories in real-time across different sites ensure that all team members are working with the latest version of the code, reducing the risk of inconsistencies. ... Compliance with data protection laws, such as the GDPR or CCPA, is also essential for companies working across borders. Development teams need to be mindful of where data is stored and ensure that their tools meet the necessary compliance requirements. Security policies should be applied consistently across all locations to prevent breaches and data leaks, which can lead to significant financial and reputational damage.



Quote for the day:

“Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” -- Benjamin Franklin