Daily Tech Digest - October 13, 2024

Fortifying Cyber Resilience with Trusted Data Integrity

While it is tempting to put all of the focus on keeping the bad guys out, there is an important truth to remember: Cybercriminals are persistent and eventually, they find a way in. The key is not to try to build an impenetrable wall, because that wall does not exist. Instead, organizations need to have a defense strategy at the data level. By monitoring data for signs of ransomware behavior, the spread of the attack can be slowed or even stopped. This means analyzing data and watching for patterns that indicate a ransomware attack is in progress. When caught early, organizations have the power to stop the attack before it causes widespread damage. Once an attack has been identified, it is time to execute the curated recovery plan. That means not restoring everything in one action but instead selectively recovering the clean data and leaving the corrupted files behind. ... Trusted data integrity offers a new way forward. By ensuring that data remains clean and intact, detecting corruption early, and enabling a faster, more intelligent recovery, data integrity is the key to reducing the damage and cost of a ransomware attack. In the end, it’s all about being prepared.
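
One concrete way to watch data for ransomware-style behavior is to look for files whose contents suddenly become high-entropy, a common side effect of encryption. The sketch below illustrates that idea only; the directory path, sample size, and entropy threshold are illustrative assumptions, not anything prescribed by the article.

```python
import math
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed content tends toward 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def flag_suspicious_files(root: str, threshold: float = 7.5, sample_size: int = 65536):
    """Yield files whose sampled content looks encrypted (possible ransomware activity)."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            sample = path.read_bytes()[:sample_size]
            if shannon_entropy(sample) > threshold:
                yield path

if __name__ == "__main__":
    for suspect in flag_suspicious_files("/data/shares"):  # illustrative path
        print(f"High-entropy file (possible encryption): {suspect}")
```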


Regulating AI Catastrophic Risk Isn't Easy

Catastrophic risks are those that cause a failure of the system, said Ram Bala, associate professor of business analytics at Santa Clara University's Leavey School of Business. Risks could range from endangering all of humanity to more contained impact, such as disruptions affecting only enterprise customers of AI products, he told Information Security Media Group. Deming Chen, professor of electrical and computer engineering at the University of Illinois, said that if AI were to develop a form of self-interest or self-awareness, the consequences could be dire. "If an AI system were to start asking, 'What's in it for me?' when given tasks, the results could be severe," he said. Unchecked self-awareness might drive AI systems to manipulate their abilities, leading to disorder, and potentially catastrophic outcomes. Bala said that most experts see these risks as "far-fetched," since AI systems currently lack sentience or intent, and likely will for the foreseeable future. But some form of catastrophic risk might already be here. Eric Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic harm" is evident in disinformation campaigns through deepfakes and digital content manipulation. 


The Importance of Lakehouse Formats in Data Streaming Infrastructure

Most data scientists spend the majority of their time getting data into a single, consistent format. However, when your streaming infrastructure has data processing capabilities, you can convert data formats at the ingestion layer and land the data in the standardized format you want to analyze. Streaming infrastructure should also scale seamlessly like Lakehouse architectures, allowing organizations to add storage and compute resources as needed. This scalability ensures that the system can handle growing data volumes and increasing analytical demands without major overhauls or disruptions to existing workflows. ... As data continues to play an increasingly central role in business operations and decision-making, the importance of efficient, flexible, and scalable data architectures will only grow. The integration of lakehouse formats with streaming infrastructure represents a significant step forward in meeting these evolving needs. Organizations that embrace this unified approach to data management will be better positioned to derive value from their data assets, respond quickly to changing market conditions, and drive innovation through advanced analytics and AI applications.
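
As an illustration of converting formats at the ingestion layer, the hedged sketch below consumes JSON events from a Kafka topic and lands them as Parquet files, a columnar format commonly used beneath lakehouse tables. The broker address, topic name, batch size, and output file names are assumptions for the example.

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq
from kafka import KafkaConsumer  # pip install kafka-python pyarrow

# Illustrative broker/topic names -- adjust for your environment.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

batch = []
for message in consumer:
    batch.append(message.value)  # e.g. {"order_id": ..., "amount": ...}
    if len(batch) >= 10_000:
        # Convert the micro-batch to a columnar table and land it in Parquet,
        # the standardized format the downstream lakehouse will analyze.
        table = pa.Table.from_pylist(batch)
        pq.write_table(table, f"orders_batch_{message.offset}.parquet")
        batch.clear()
```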


Open source culture: 9 core principles and values

Whether you’re experienced or just starting out, your contributions are valued in open source communities. This shared responsibility helps keep the community strong and makes sure the projects run smoothly. When people come together to contribute and work toward shared goals, it fuels creativity and drives productivity. ... While the idea of meritocracy is incredibly appealing, there are still some challenges that come along with it. In reality, the world is not fair and people do not get the same opportunities and resources to express their ideas. Many people face challenges such as lack of resources or societal biases that often go unacknowledged in "meritocratic" situations. Essentially, open source communities suffer from the same biases as any other communities. For meritocracy to truly work, open source communities need to actively and continuously work to make sure everyone is included and has a fair and equal opportunity to contribute. ... Open source is all about how everyone gets a chance to make an impact and difference. As mentioned previously, titles and positions don’t define the value of your work and ideas—what truly matters is the expertise, work and creativity you bring to the table.


How to Ensure Cloud Native Architectures Are Resilient and Secure

Microservices offer flexibility and faster updates but also introduce complexity — and more risk. In this case, the company had split its platform into dozens of microservices, handling everything from user authentication to transaction processing. While this made scaling easier, it also increased the potential for security vulnerabilities. With so many moving parts, monitoring API traffic became a significant challenge, and critical vulnerabilities went unnoticed. Without proper oversight, these blind spots could quickly become significant entry points for attackers. Unmanaged APIs could create serious vulnerabilities in the future. If these gaps aren’t addressed, companies could face major threats within a few years. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today.
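
One way to surface the unmanaged-API blind spot described above is to diff the endpoints actually observed at the gateway against a declared API inventory. The sketch below assumes an OpenAPI document and a combined-log-style access log; both file names and the log format are illustrative.

```python
import json
import re
from pathlib import Path

def declared_endpoints(openapi_file: str) -> set[tuple[str, str]]:
    """Collect (method, path) pairs from an OpenAPI 3.x document."""
    spec = json.loads(Path(openapi_file).read_text())
    return {
        (method.upper(), path)
        for path, ops in spec.get("paths", {}).items()
        for method in ops
    }

def observed_endpoints(access_log: str) -> set[tuple[str, str]]:
    """Parse 'METHOD /path' pairs out of a gateway access log (format assumed)."""
    pattern = re.compile(r'"(GET|POST|PUT|PATCH|DELETE) (\S+)')
    hits = set()
    for line in Path(access_log).read_text().splitlines():
        m = pattern.search(line)
        if m:
            hits.add((m.group(1), m.group(2).split("?")[0]))
    return hits

shadow = observed_endpoints("gateway-access.log") - declared_endpoints("openapi.json")
for method, path in sorted(shadow):
    print(f"Unmanaged API traffic: {method} {path}")
```

A production tool would also normalize templated paths such as /users/{id}; this sketch only flags exact mismatches.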


Focus on Tech Evolution, Not on Tech Debt

Tech Evolution represents a mindset shift. Instead of simply repairing the system, Tech Evolution emphasises continuous improvement, where the team proactively advances the system to stay ahead of future requirements. It’s a strategic, long-term investment in the growth and adaptability of the technology stack. Tech Evolution is about future-proofing your platform. Rather than focusing on past mistakes (tech debt), the focus shifts toward how the technology can evolve to accommodate new trends, user demands, and business goals. ... One way to action Tech Evolution is to dedicate time specifically for innovation. Development teams can use innovation days, hackathons, or R&D-focused sprints to explore new ideas, tools, and frameworks. This builds a culture of experimentation and continuous learning, allowing the team to identify future opportunities for evolving the tech stack. ... Fostering a culture of continuous learning is essential for Tech Evolution. Offering training programs, hosting workshops, and encouraging attendance at conferences ensures your team stays informed about emerging technologies and best practices. 


Singapore’s Technology Empowered AML Framework

Developed by the Monetary Authority of Singapore (MAS) in collaboration with six major banks, COSMIC is a centralised digital platform for global information sharing among financial institutions to combat money laundering, terrorism financing, and proliferation financing, enhancing defences against illicit activities. By pooling insights from different financial entities, COSMIC enhances Singapore’s ability to detect and disrupt money laundering schemes early, particularly when transactions cross international borders (IMC Report). Another significant collaboration is the Anti-Money Laundering/Countering the Financing of Terrorism Industry Partnership (ACIP). This partnership between MAS, the Commercial Affairs Department (CAD) of the Singapore Police Force, and private-sector financial institutions allows for the sharing of best practices, the issuance of advisories, and the development of enhanced AML measures. ... Another crucial aspect of Singapore’s AML strategy is the AML Case Coordination and Collaboration Network (AC3N). This new framework builds on the Inter-Agency Suspicious Transaction Reports Analytics (ISTRA) task force to improve coordination between all relevant agencies.


Future-proofing Your Data Strategy with a Multi-tech Platform

Traditional approaches that were powered by a single tool or two, like Apache Cassandra or Apache Kafka, were once the way to proceed. However, now used alone, these tools are proving insufficient to meet the demands of modern data ecosystems. The challenges presented by today’s distributed, real-time, and unstructured data have made it clear that businesses need a new strategy. Increasingly, that strategy involves the use of a multi-tech platform. ... Implementing a multi-tech platform can be complex, especially considering the need to manage integrations, scalability, security, and reliability across multiple technologies. Many organizations simply do not have the time or expertise in the different technologies to pull this off. Increasingly, organizations are partnering with a technology provider that has the expertise in scaling traditional open-source solutions and the real-world knowledge in integrating the different solutions. That’s where Instaclustr by NetApp comes in. Instaclustr offers a fully managed platform that brings together a comprehensive suite of open-source data technologies. 


Strong Basics: The Building Blocks of Software Engineering

It is alarmingly easy to assume a “truth” on faith when, in reality, it is open to debate. Effective problem-solving starts by examining assumptions because the assumptions that survive your scrutiny will dictate which approaches remain viable. If you didn’t know your intended plan rested on an unfounded or invalid assumption, imagine how disastrous it would be to proceed anyway. Why take that gamble? ... Test everything you design or build. It is astounding how often testing gets skipped. A recent study showed that just under half of the time, information security professionals don’t audit major updates to their applications. It’s tempting to look at your application on paper and reason that it should be fine. But if everything worked like it did on paper, testing would never find any issues — yet so often it does. The whole point of testing is to discover what you didn’t anticipate. Because no one can foresee everything, the only way to catch what you didn’t is to test. ... companies continue to squeeze out more productivity from their workforce by adopting the cutting-edge technology of the day, generative AI being merely the latest iteration of this trend.
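
In the spirit of the passage, here is a minimal hedged example of testing for what you did not anticipate: a hypothetical discount helper plus a pytest case that exercises the boundary and invalid inputs that look "obviously fine" on paper.

```python
# test_discounts.py -- run with `pytest`
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical helper: reduce price by a percentage, rejecting invalid rates."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_case():
    assert apply_discount(100.0, 25) == 75.0

def test_edge_cases_you_might_not_anticipate():
    # Boundary values and invalid input -- exactly the cases that look fine
    # on paper but are where testing earns its keep.
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```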


The resurgence of DCIM: Navigating the future of data center management

A significant factor behind the resurgence of DCIM is the exponential growth in data generation and the requirement for more infrastructure capacity. Businesses, consumers, and devices are producing data at unprecedented rates, driven by trends such as cloud computing, digital transformation, and the Internet of Things (IoT). This influx of data has created a critical demand for advanced tools that can offer comprehensive visibility into resources and infrastructure. Organizations are increasingly seeking DCIM solutions that enable them to efficiently scale their data centers to handle this growth while maintaining optimal performance. ... Modern DCIM solutions, such as RiT Tech’s XpedITe, also leverage AI and machine learning to provide predictive maintenance capabilities. By analyzing historical data and identifying patterns, they can predict when equipment is likely to fail and automatically schedule maintenance ahead of any failure, as well as automating routine tasks such as resource allocation. As data centers continue to grow in size and complexity, effective capacity planning becomes increasingly important. DCIM solutions provide the tools needed to plan and optimize capacity, ensuring that data center resources are used efficiently and that there is sufficient capacity to meet future demand.
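
As a toy illustration of the predictive-maintenance idea (real DCIM products such as XpedITe use far richer models), the sketch below fits a linear trend to assumed historical temperature readings and estimates when a failure threshold will be crossed. The data, threshold, and 30-day maintenance window are invented for the example.

```python
import numpy as np

# Assumed historical telemetry: days elapsed vs. bearing temperature (°C).
days = np.array([0, 7, 14, 21, 28, 35], dtype=float)
temps = np.array([61.0, 62.1, 63.4, 64.2, 65.8, 66.9])
FAILURE_THRESHOLD = 75.0  # illustrative vendor-specified limit

# Fit a linear trend and project when the threshold will be crossed.
slope, intercept = np.polyfit(days, temps, deg=1)
days_to_threshold = (FAILURE_THRESHOLD - intercept) / slope
remaining = days_to_threshold - days[-1]

print(f"Warming ~{slope:.2f} °C/day; threshold reached in ~{remaining:.0f} days")
if remaining < 30:
    print("Schedule maintenance ahead of the predicted failure window.")
```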



Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown

Daily Tech Digest - October 12, 2024

Technically skilled managers can be great leaders, if they do this

Outlier leaders often struggle with delegation because they hold themselves—and their work—to impossibly high standards. To ensure that everything is perfect, you may take on too much, which leads to burnout and stifles your team’s potential. This approach drains your energy and can cause your team to feel undervalued or micromanaged. Building trust in your team is crucial for leveraging their strengths and empowering them to contribute to the organization’s success. Trust isn’t just about assuming your team will do their job; it’s about giving them the freedom to innovate, make mistakes, and grow. When your team feels trusted, they are more likely to take ownership of their work and deliver results that align with your vision. Start by delegating smaller tasks and gradually increase the responsibility you give to team members. ... Fostering a culture where team members feel comfortable sharing their thoughts, ideas, and concerns is key to maintaining strong team cohesion. It also helps outlier leaders stay connected with their teams, giving them a better understanding of what’s working and what’s not. Make communication a priority by holding regular team meetings focused not just on project updates but on sharing feedback, discussing challenges, and exploring new ideas.


6 biggest healthcare security threats

Traffic from bad bots — such as those that attempt to scrape data from websites, send spam, or download unwanted software — present another major challenge for healthcare organizations. The problem became especially pressing when governments around the world began setting up new websites and other digital infrastructure to support COVID vaccine registrations and appointments. Bad actors bombarded these new, hastily established and largely untested sites with a huge volume of bad-bot traffic. Imperva says it observed a 372% increase in bad-bot traffic on healthcare websites in the first year of the pandemic. “Increased levels of traffic result in downtime and disruption for legitimate human users who are trying to access critical services on their healthcare providers’ site,” Ray says. “It might also result in increased infrastructure costs for the organization as it tries to sustain uptime from the persistent, burdensome level of elevated traffic.” ... Wearable and implantable smart medical devices are a proven cybersecurity risk. These technologies certainly offer better analysis, assisting diagnosis of medical conditions while aiding independent living, but mistakes made in securing such medtech have exposed vulnerable users to potential attack.


Cybercriminals Are Targeting AI Conversational Platforms

Besides the issue of retained PII stored in communications between the AI agent and end users, bad actors were also able to target access tokens, which could be used by enterprises for the implementation of the service with APIs of external services and applications. According to Resecurity, due to the significant penetration of external AI systems into enterprise infrastructure and the processing of massive volumes of data, their implementation without proper risk assessment should be considered an emerging IT supply chain cybersecurity risk. The experts from Resecurity outlined the need for AI trust, risk, and security management (TRiSM), as well as Privacy Impact Assessments (PIAs) to identify and mitigate potential or known impacts that an AI system may have on privacy, as well as increased attention to supply chain cybersecurity. Conversational AI platforms have already become a critical element of the modern IT supply chain for major enterprises and government agencies. Their protection will require a balance between traditional cybersecurity measures relevant to SaaS (Software-as-a-Service) and those specialized and tailored to the specifics of AI, highlighted the threat research team at Resecurity.


Leveraging digital technologies for intralogistics optimisation

With rapid advances in digital technologies such as robotics, artificial intelligence (AI) and the Internet of Things (IoT), companies can now optimise their intralogistics processes more easily and achieve better results. The following sections focus on a range of key aspects that can be significantly improved by leveraging digital technologies in support of intralogistics. ... Wearables are often used in warehouses to optimise worker movements, enable picking and packing efficiency, and ensure worker safety. Wearables equipped with augmented reality technology can be used for navigating and guiding employees through picking and packing operations. These techniques yield benefits not only by speeding up processes and reducing errors but also by reducing employee training time. ... The overall benefits of intralogistics optimisation are significant in terms of process efficiency, but securing the benefits depends on a range of sub-domains as described above. These technologies enhance visibility of operations by enabling real-time monitoring at every stage of production, right from the raw material supply stage up to the final delivery of the manufactured goods, which in turn can enhance efficiency and reduce downtime and production costs.


Do we even need Product Managers?

The brand manager of the past was in many ways the product manager of the present. And not long after we began our discussion, Shreyas brought up the rather famous example of Procter and Gamble’s ‘Brand Man’ memo. ... And then, as tech gave businesses the ability to slice and dice consumer categories and cohorts and track multiple data points, the more ‘artistic’ PMs—those who made decisions based on intuition and taste—had to find patterns that would align with what the data suggested. But consumer data isn’t absolute truth. Humans are irrational. They might do one thing, and say something completely different in a survey. Or a feedback form. And as Chandra outlines, most companies are drowning in data now, with no clue what to do with all the metrics they track. ... When do you hire your first Product Manager? This is a question that an increasing number of CEOs and founders are asking in the post-ZIRP era—where efficiency is key and AI is pushing into more and more functions. So what happens when you sit down and ask yourself, “Do I need to hire product managers?” ... The one certainty that emerged from our discussion was that the role of a Product Manager has to evolve and break out of the boundaries it is now enclosed in, although some skills and characteristics will remain constant.


Scaling Uber’s Batch Data Platform: A Journey to the Cloud with Data Mesh Principles

One significant challenge that Uber has faced during the migration process is the need to accommodate changes in data ownership and the limits set by GCS. Data ownership changes can occur due to team reorganizations or users reassigning assets. To address this, Uber implemented an automated process to monitor and reassign ownership when necessary, ensuring data remains securely stored and managed. Additionally, Uber optimized its data distribution to avoid hitting GCS storage limits, ensuring that heavily used tables are separated into their buckets to improve performance and make monitoring easier. ... Looking to the future, Uber aims to further expand on its use of data mesh principles by building a platform that allows for self-governed data domains. This will simplify infrastructure management and enhance data governance, ultimately creating a more agile, secure, and cost-efficient data ecosystem. The cloud migration of Uber’s batch data platform is a significant undertaking, but through careful planning and the development of innovative tools like DataMesh, Uber is positioning itself for greater scalability, security, and operational efficiency in the cloud.
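
As a simplified, hypothetical illustration of the bucket-planning idea (not Uber's actual tooling), the sketch below gives heavily used tables their own buckets and packs the rest under a soft capacity limit. Table names, sizes, and limits are assumptions.

```python
# Hypothetical table sizes in TiB; limits are illustrative, not real GCS quotas.
TABLES = {"trips": 900, "riders": 120, "payments": 450, "ratings": 40, "geo": 15}
HOT_TABLE_TIB = 300        # tables above this get a dedicated bucket
BUCKET_CAPACITY_TIB = 500  # soft cap for shared buckets

def plan_buckets(tables: dict[str, int]) -> dict[str, list[str]]:
    plan: dict[str, list[str]] = {}
    shared: list[str] = []
    shared_size = 0
    shard = 0
    for name, size in sorted(tables.items(), key=lambda kv: -kv[1]):
        if size >= HOT_TABLE_TIB:
            plan[f"bucket-{name}"] = [name]          # heavy table, own bucket
        elif shared_size + size > BUCKET_CAPACITY_TIB:
            plan[f"bucket-shared-{shard}"] = shared  # close the current shared bucket
            shard, shared, shared_size = shard + 1, [name], size
        else:
            shared.append(name)
            shared_size += size
    if shared:
        plan[f"bucket-shared-{shard}"] = shared
    return plan

print(plan_buckets(TABLES))
```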


ShadowLogic Attack Targets AI Model Graphs to Create Codeless Backdoors

By using the ShadowLogic method, HiddenLayer says, threat actors can implant codeless backdoors in ML models that will persist across fine-tuning and which can be used in highly targeted attacks. Starting from previous research that demonstrated how backdoors can be implemented during the model’s training phase by setting specific triggers to activate hidden behavior, HiddenLayer investigated how a backdoor could be injected in a neural network’s computational graph without the training phase. “A computational graph is a mathematical representation of the various computational operations in a neural network during both the forward and backward propagation stages. In simple terms, it is the topological control flow that a model will follow in its typical operation,” HiddenLayer explains. Describing the data flow through the neural network, these graphs contain nodes representing data inputs, the performed mathematical operations, and learning parameters. “Much like code in a compiled executable, we can specify a set of instructions for the machine (or, in this case, the model) to execute,” the security company notes.
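
Since the attack lives in the model's computational graph rather than in code, one defensive counterpart is simply to inspect that graph. The hedged sketch below walks an ONNX model's nodes and flags control-flow operators that a plain feed-forward model would not normally contain; the file path and the operator watch-list are assumptions, and this is an illustration rather than a detection method proposed by HiddenLayer.

```python
import onnx  # pip install onnx

# Operators that implement conditional control flow inside the graph itself.
CONTROL_FLOW_OPS = {"If", "Loop", "Scan"}  # illustrative watch-list

model = onnx.load("model.onnx")  # path is a placeholder
print(f"Graph '{model.graph.name}' has {len(model.graph.node)} nodes")

for node in model.graph.node:
    if node.op_type in CONTROL_FLOW_OPS:
        print(f"Review: {node.op_type} node '{node.name}' "
              f"(inputs: {list(node.input)}) -- unexpected branching "
              f"can hide trigger-activated behavior")
```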


What Goes Into AI? Exploring the GenAI Technology Stack

Compiling the training datasets involves crawling, compiling, and processing all text (or audio or visual) data available on the internet and other sources (e.g., digitized libraries). After compiling these raw datasets, engineers layer in relevant metadata (e.g., tagging categories), tokenize data into chunks for model processing, format data into efficient training file formats, and impose quality control measures. While the market for AI model-powered products and services may be worth trillions within a decade, many barriers to entry prevent all but the most well-resourced companies from building cutting-edge models. The highest barrier to entry is the millions to billions of capital investment required for model training. To train the latest models, companies must either construct their own data centers or make significant purchases from cloud service providers to leverage their data centers. While Moore’s law continues to rapidly lower the price of computing power, this is more than offset by the rapid scale up in model sizes and computation requirements. Training the latest cutting-edge models requires billions in data center investment.
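
To make the tokenize-and-chunk step concrete, here is a small hedged sketch using the tiktoken library as one example tokenizer; the encoding choice, context length, and document text are assumptions.

```python
import tiktoken  # pip install tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # example tokenizer choice
CONTEXT_LENGTH = 2048  # illustrative training sequence length

def chunk_document(text: str, context_length: int = CONTEXT_LENGTH) -> list[list[int]]:
    """Tokenize a raw document and split it into fixed-size training chunks."""
    tokens = encoding.encode(text)
    return [tokens[i:i + context_length] for i in range(0, len(tokens), context_length)]

raw_doc = "Example corpus text pulled from a crawled page..."  # placeholder content
chunks = chunk_document(raw_doc)
print(f"{len(chunks)} chunk(s); first chunk has {len(chunks[0])} tokens")
```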


Navigating the Challenges of Hybrid IT Environments in the Age of Cloud Repatriation

Cloud repatriation can often create challenges of its own. The costs associated with moving services back on-prem can be significant: New hardware, increased maintenance, and energy expenses should all be factored in. Yet, for some, the financial trade-off for repatriation is worth it, especially if cloud expenses become unsustainable or if significant savings can be achieved by managing resources partially on-prem. Cloud repatriation is a calculated risk that, if done for the right reasons and executed successfully, can lead to efficiency and peace of mind for many companies. ... Hybrid cloud observability tools empower organizations to track performance, boost productivity, ensure system health, and swiftly resolve issues, leading to reduced downtime, fewer outages, and enhanced service availability for both employees and customers. By enhancing transparency and intelligence, observability tools ultimately strengthen the resilience of the entire IT infrastructure — no matter what path a company takes regarding the cloud. When deciding which workloads to move back on-prem versus which to keep in the cloud, companies should carefully consider their specific needs, such as cost constraints, performance requirements, and compliance obligations.


Solve People Silo Problems to Solve Data Silo Problems

According to Farooq, a critical aspect of any Chief Data Officer’s role is ensuring that data is both accessible and fit for purpose, supporting everything from revenue generation to regulatory compliance and risk management. Achieving these goals requires a robust data strategy that is closely aligned with the overall business strategy. Ultimately, it's all about the data. Reflecting on the challenges of integrating GenAI into institutions, particularly in managing unstructured data, Farooq likens the AI journey to Gartner's hype cycle, highlighting how innovations initially peak with inflated expectations before experiencing a period of disillusionment. However, unlike trends such as blockchain, Farooq believes AI is here to stay and will follow a different trajectory, leading to lasting productivity gains. From a data governance perspective, Farooq sees immediate practical applications for GenAI, such as creating synthetic test data for market scenarios and writing data quality rules. He emphasizes the importance of democratizing AI across all levels of an organization, similar to how data literacy became crucial for everyone—from CEOs to marketers. 



Quote for the day: 
"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - October 11, 2024

The Power of Going Opposite: Unlocking Hidden Opportunities in Business

It is more than just going in the opposite direction. This is rooted in a principle I call the Law of Opposites. With this, you look in the opposite direction everyone else is looking, and what you will find are unique opportunities in the most inconspicuous places. By leveraging this law, business leaders and whole organizations can uncover hidden opportunities and create a significant competitive advantage they would otherwise miss. Initially, many fear that looking in the opposite direction will leave them in uncharted territory unnecessarily. For instance, why would a restaurant look at what is currently going on in the auto industry? That should accomplish quite the opposite, right? This principle of going opposite has the opposite effect of that fear, revealing unexpected opportunities! With an approach that places organizations on opposite sides of conventional thinking, you take on a new viewpoint. ... Leveraging the Law of Opposites and putting in the effort to go opposite has two critical benefits to organizations. For one, when faced with what appears to be insurmountable competition, it allows organizations to leap ahead, pivoting their offerings to see what others miss.


Top 5 Container Security Mistakes and How to Avoid Them

Before containers are deployed, you need assurance they don’t contain vulnerabilities right from the start. But unfortunately, many organizations fail to scan container images during the build process. That leaves serious risks lurking unseen. Unscanned container images allow vulnerabilities and malware to easily slip into production environments, creating significant security issues down the road. ... Far too often, developers demand (and receive) excessive permissions for container access, which creates unnecessary risk. If compromised or misused, overprivileged containers can lead to devastating security incidents. ... Threat prevention shouldn’t stop once a container launches, either. But some forget to extend protections during the runtime phase. Containers left unprotected at runtime allow adversarial lateral movement across environments if compromised. ... Container registries offer juicy targets when left unprotected. After all, compromise the registry, and you will have the keys to infect every image inside. Unsecured registries place your entire container pipeline in jeopardy if accessed maliciously. ... You can’t protect what you can’t see. Monitoring gives visibility into container health events, network communications, and user actions.
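
As one hedged example of avoiding overprivileged containers, the sketch below uses the official Kubernetes Python client to declare a restrictive security context; the workload name, image, and exact field choices are illustrative.

```python
from kubernetes import client  # pip install kubernetes

# A restrictive security context: no root, no privilege escalation,
# read-only filesystem, and all Linux capabilities dropped.
security_context = client.V1SecurityContext(
    run_as_non_root=True,
    allow_privilege_escalation=False,
    read_only_root_filesystem=True,
    capabilities=client.V1Capabilities(drop=["ALL"]),
)

container = client.V1Container(
    name="payments-api",            # illustrative workload name
    image="registry.example.com/payments-api:1.4.2",
    security_context=security_context,
)

print(container.security_context)
```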


Why you should want face-recognition glasses

Under the right circumstances, we can easily exchange business card-type information by simply holding the two phones near each other. To give a business card is to grant permission for the receiver to possess the personal information thereon. It would be trivial to add a small bit of code to grant permission for face recognition. Each user could grant that permission with a checkbox in the contacts app. That permission would automatically share both the permission and a profile photo. Face-recognition permission should be grantable and revokable at any time on a person-by-person basis. Ten years from now (when most everyone will be wearing AI glasses), you could be alerted at conferences and other business events about everyone you’ve met before, complete with their name, occupation, and history of interaction. Collecting such data throughout one’s life on family and friends would also be a huge benefit to older people suffering from age-related dementia or just from a naturally failing memory. Shaming AI glasses as a face-recognition privacy risk is the wrong tactic, especially when the glasses are being used only as a camera. Instead, we should recognize that permission-based face-recognition features in AI glasses would radically improve our careers and lives.
 

Operationalize a Scalable AI With LLMOps Principles and Best Practices

The recent rise of Generative AI with its most common form of large language models (LLMs) prompted us to consider how MLOps processes should be adapted to this new class of AI-powered applications. LLMOps (Large Language Models Operations) is a specialized subset of MLOps (Machine Learning Operations) tailored for the efficient development and deployment of large language models. LLMOps ensures that model quality remains high and that data quality is maintained throughout data science projects by providing infrastructure and tools. Use a consolidated MLOps and LLMOps platform to enable close interaction between data science and IT DevOps to increase productivity and deploy a greater number of models into production faster. MLOps and LLMOps will both bring agility to AI innovation in the project. ... Evaluating LLMs is a challenging and evolving domain, primarily because LLMs often demonstrate uneven capabilities across different tasks. LLMs can be sensitive to prompt variations, demonstrating high proficiency in one task but faltering with slight deviations in prompts. Since most LLMs output natural language, it is very difficult to evaluate the outputs via traditional Natural Language Processing metrics.
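
Because free-form LLM output resists traditional NLP metrics, teams often fall back on reference- or rubric-based checks. The sketch below is a minimal hedged example: it scores answers with a normalized exact match and a keyword-coverage fallback; the evaluation set, scoring rules, and stub model are assumptions.

```python
import re

def normalize(text: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def keyword_coverage(answer: str, keywords: list[str]) -> float:
    hits = sum(1 for k in keywords if normalize(k) in normalize(answer))
    return hits / len(keywords) if keywords else 0.0

# Illustrative evaluation set: prompt, reference answer, and must-have facts.
EVAL_SET = [
    {"prompt": "What port does HTTPS use by default?",
     "reference": "443",
     "keywords": ["443"]},
    {"prompt": "Name the three pillars of observability.",
     "reference": "logs, metrics, and traces",
     "keywords": ["logs", "metrics", "traces"]},
]

def evaluate(generate) -> float:
    """`generate` is any callable prompt -> answer (e.g. an LLM API wrapper)."""
    scores = []
    for case in EVAL_SET:
        answer = generate(case["prompt"])
        exact = normalize(answer) == normalize(case["reference"])
        scores.append(1.0 if exact else keyword_coverage(answer, case["keywords"]))
    return sum(scores) / len(scores)

print(evaluate(lambda p: "Logs, metrics and traces."))  # stub model for illustration
```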


Using Chrome's accessibility APIs to find security bugs

Chrome exposes all the UI controls to assistive technology. Chrome goes to great lengths to ensure its entire UI is exposed to screen readers, braille devices and other such assistive tech. This tree of controls includes all the toolbars, menus, and the structure of the page itself. This structural definition of the browser user interface is already sometimes used in other contexts, for example by some password managers, demonstrating that investing in accessibility has benefits for all users. We’re now taking that investment and leveraging it to find security bugs, too. ... Fuzzers are unlikely to stumble across these control names by chance, even with the instrumentation applied to string comparisons. In fact, this by-name approach turned out to be only 20% as effective as picking controls by ordinal. To resolve this we added a custom mutator which is smart enough to put in place control names and roles which are known to exist. We randomly use this mutator or the standard libprotobuf-mutator in order to get the best of both worlds. This approach has proven to be about 80% as quick as the original ordinal-based mutator, while providing stable test cases.
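
Chrome's fuzzer is built on libprotobuf-mutator in C++; purely as an illustration of the idea, the Python sketch below biases mutations toward control names and roles known to exist in the accessibility tree instead of arbitrary strings. The control names, roles, and 50/50 mix are assumptions.

```python
import random

# Names/roles harvested from the accessibility tree ahead of time (illustrative).
KNOWN_CONTROL_NAMES = ["Back", "Reload", "Bookmark this tab", "Address and search bar"]
KNOWN_ROLES = ["button", "textField", "menuItem"]

def mutate_action(action: dict, rng: random.Random) -> dict:
    """Mutate a synthetic UI action, preferring known control names half the time."""
    mutated = dict(action)
    if rng.random() < 0.5:
        # Smart mutation: target a control the accessibility tree actually exposes.
        mutated["target_name"] = rng.choice(KNOWN_CONTROL_NAMES)
        mutated["target_role"] = rng.choice(KNOWN_ROLES)
    else:
        # Generic mutation: random printable bytes, the way a plain mutator would.
        mutated["target_name"] = "".join(chr(rng.randrange(32, 127)) for _ in range(8))
    return mutated

rng = random.Random(0)
print(mutate_action({"verb": "click", "target_name": "?"}, rng))
```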


Investing in Privacy by Design for long-term compliance

Organizations still carry a lot of prejudice when discussing principles like Privacy by Design, which stems from a lack of knowledge and awareness. A lot of organizations which are handling sensitive private data have a dedicated Data Protection Officer only on paper, and the person performing the role of the DPO is often poorly educated and misinformed regarding the subject. Companies have undergone a shallow transformation and defined the roles and responsibilities when the GDPR was put into force, often led by external consultants, and now those DPO’s in the organizations are just trying to meet the minimum requirements and hope everything turns out for the best. Most of the legacy systems in companies were ‘taken care of’ during these transformations, impact assessments were made, and that was the end of the discussion about related risks. For adequate implementation of principles like Privacy by Design and Security by Design, all of the organization has to be aware that this is something that has to be done, and support from all the stakeholders needs to be ensured. By correctly implementing Privacy by Design, privacy risks need to be established at the beginning, but also carefully managed until the end of the project, and then periodically reassessed.


Benefits of a Modern Data Historian

With Industry 4.0, data historians have advanced significantly. They now pull in data from IoT devices and cloud platforms, handling larger and more complex datasets. Modern historians use AI and real-time analytics to optimize operations across entire businesses, making them more scalable, secure, and integrated with other digital systems, perfectly fitting the connected nature of today’s industries. Traditional data historians were limited in scalability and integration capabilities, often relying on manual processes and statistical methods of data collection and storage. Modern data historians, particularly those built using a time series database (TSDB), offer significant improvements in speed and ease of data processing and aggregation. One such foundation for a modern data historian is InfluxDB. ... Visualizing data is crucial for effective decision-making as it transforms complex datasets into intuitive, easily understandable formats. This lets stakeholders quickly grasp trends, identify anomalies, and derive actionable insights. InfluxDB seamlessly integrates with visualization tools like Grafana, renowned for its powerful, interactive dashboards.
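
As a small hedged example of feeding a modern historian, the snippet below writes one sensor reading to InfluxDB as a time-series point using the influxdb-client library; the URL, token, org, bucket, and measurement names are placeholders.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details -- substitute your own.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("boiler_temperature")   # measurement
    .tag("plant", "plant-7")      # metadata for slicing/filtering
    .field("celsius", 86.4)       # the actual reading
)
write_api.write(bucket="historian", record=point)
client.close()
```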


Beyond Proof of Concepts: Will Gen AI Live Up to the Hype?

Two years after ChatGPT's launch, the experimental stage is largely behind CIOs and tech leaders. What once required discretionary funding approval from CFOs and CEOs has now evolved into a clear recognition that gen AI could be a game changer. But scaling this technology across multiple business use cases while aligning them with strategic objectives - without overwhelming users - is a more practical approach. Still, nearly 90% of gen AI projects remain stuck in the pilot phase, with many being rudimentary. According to Gartner, one major hurdle is justifying the significant investments in gen AI, particularly when the benefits are framed merely as productivity enhancements, which may not always translate into tangible financial gains. "Many organizations leverage gen AI to transform their business models and create new opportunities, yet they continue to struggle with realizing value," said Rita Salaam, distinguished vice president analyst at Gartner. ... In another IBM survey, tech leaders revealed that half of their IT budgets will be allocated to AI and cloud over the next two years. This shift suggests that gen AI is transitioning from the "doubt" phase to the "confidence" phase.


Microsoft’s Take on Kernel Access and Safe Deployment Following CrowdStrike Incident

This was discussed at some length at the MVI summit. “We face a common set of challenges in safely rolling out updates to the large Windows ecosystem, from deciding how to do measured rollouts with a diverse set of endpoints to being able to pause or rollback if needed. A core SDP principle is gradual and staged deployment of updates sent to customers,” comments Weston in a blog on the summit. “This rich discussion at the Summit will continue as a collaborative effort with our MVI partners to create a shared set of best practices that we will use as an ecosystem going forward,” he blogged. Separately, he expanded to SecurityWeek: “We discussed ways to de-conflict the various SDP approaches being used by our partners, and to bring everything together as a consensus on the principles of SDP. We want everything to be transparent, but then we want to enforce this standard as a requirement for working with Microsoft.” Agreeing and requiring a minimum set of safe deployment practices from partners is one thing; ensuring that those partners employ the agreed SDP is another. “Technical enforcement would be a challenge,” he said. “Transparency and accountability seem to be the best methodology for now.”
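
A minimal hedged sketch of the "gradual and staged deployment" principle: expand the rollout ring by ring and pause or roll back when the error rate regresses. Ring sizes, the error threshold, and the hook functions are assumptions, not any vendor's actual SDP implementation.

```python
ROLLOUT_RINGS = [0.01, 0.05, 0.25, 1.0]   # fraction of the fleet per stage (illustrative)
MAX_ERROR_RATE = 0.002                    # abort threshold (illustrative)

def staged_rollout(deploy, error_rate, rollback) -> bool:
    """deploy(fraction), error_rate() and rollback() are supplied by the platform."""
    for ring in ROLLOUT_RINGS:
        deploy(ring)
        observed = error_rate()
        print(f"Ring {ring:.0%}: error rate {observed:.4%}")
        if observed > MAX_ERROR_RATE:
            rollback()
            print("Regression detected -- rollout paused and rolled back.")
            return False
    return True

# Stub hooks so the sketch runs standalone.
staged_rollout(deploy=lambda f: None, error_rate=lambda: 0.0005, rollback=lambda: None)
```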


What Hybrid Quantum-Classic Computing Looks Like

Because classical and quantum computers have limitations, the two are being used as a hybrid solution. For example, a quantum computer is an accelerator for a classical computer and classical computers can control quantum systems. However, there are challenges. One challenge is that classical computers and quantum computers operate at different ambient temperatures, which means a classical computer can’t run in a near zero Kelvin environment, nor can a quantum computer operate in a classical environment. Therefore, separating the two is necessary. Another challenge is that quantum computers are very noisy and therefore error prone. To address that issue, Noisy Intermediate-Scale Quantum or NISQ computing emerged. The assumption is that one must just accept the errors and create variational algorithms. In this vein, one guesses what a solution looks like and then attempts to tweak the parameters of it using something like Stochastic gradient descent, which is used to train neural networks. Using a hybrid system, the process is iterative. The classical computer measures the state of the qubits, analyzes them and sends instructions for what to do next. This is how the classical-quantum error correction iterations work at a high level.
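
A framework-agnostic, hedged sketch of that hybrid loop: the classical side proposes parameters, a stubbed "quantum" evaluation returns a noisy expectation value, and a simple gradient-descent update (standing in for stochastic gradient descent on real hardware) nudges the parameters. The cost function, noise level, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantum_expectation(params: np.ndarray) -> float:
    """Stand-in for running a parameterized circuit and measuring the qubits."""
    noise = rng.normal(0.0, 0.01)              # NISQ devices are noisy
    return float(np.sum(np.sin(params) ** 2) + noise)

def finite_difference_grad(params: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shift = np.zeros_like(params)
        shift[i] = eps
        grad[i] = (quantum_expectation(params + shift) -
                   quantum_expectation(params - shift)) / (2 * eps)
    return grad

params = rng.uniform(0, np.pi, size=4)          # variational parameters
for step in range(200):                         # classical optimizer loop
    params -= 0.1 * finite_difference_grad(params)  # gradient-descent update

print("final expectation:", quantum_expectation(params))
```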



Quote for the day:

"Facing difficult circumstances does not determine who you are. They simply bring to light who you already were." -- Chris Rollins

Daily Tech Digest - October 08, 2024

How To Secure Microservices in a Multicloud Architecture

In a microservices architecture, each service operates independently, allowing updates, maintenance and modifications without disrupting others. This isolation should extend across infrastructure layers, including databases, ensuring no service can access another’s data. Full isolation prevents attackers from moving laterally within the system. ... Sensitive data, such as passwords or personal information, should never be exposed in plain text or storage. Users and automated systems can easily access this information, making it vulnerable to threats. Businesses should always remove or mask this information before storing it in any records. Practices like TLS/HTTPS or encrypting logs are not enough, since one caters to securing data in transit while the other secures data at rest. Hence, the best way is to stop storing sensitive information altogether. ... Zero trust security works on the idea that no user or device should be trusted by default, whether inside or outside the network. By using the zero trust model, businesses can make sure every user and device is constantly authenticated and authorized, no matter where they are. In microservices, this means checking every interaction between services, enforcing strict access controls and logging all actions.
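
As a hedged illustration of keeping sensitive data out of logs and storage, the sketch below masks or drops PII fields before a record is persisted; the field list and masking rules are assumptions.

```python
import copy
import re

SENSITIVE_FIELDS = {"password", "ssn", "api_key"}   # drop entirely (illustrative)
EMAIL_RE = re.compile(r"([^@\s])[^@\s]*(@\S+)")

def sanitize(record: dict) -> dict:
    """Return a copy of the record safe to log or persist."""
    clean = copy.deepcopy(record)
    for field in SENSITIVE_FIELDS & clean.keys():
        del clean[field]                              # never store, not even masked
    if "email" in clean:
        clean["email"] = EMAIL_RE.sub(r"\1***\2", clean["email"])
    if "card_number" in clean:
        clean["card_number"] = "****" + clean["card_number"][-4:]
    return clean

print(sanitize({
    "user": "amara", "password": "hunter2", "email": "amara@example.com",
    "card_number": "4111111111111111",
}))
```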


The road to Industry 5.0 is your data and AI

When Industry 5.0 emerges, we can expect to see the convergence of all that work and collected data. The next industrial revolution will be steeped in bridging the physical and the digital realms. Effectively this goes back to that human versus machine argument, but optimizing both human and machine to enhance their capabilities. AI and cloud computing will reach a harmony where workers can produce their best results, which can be replicated in processes throughout the supply chain. Industrial AI powers our lives in the back end. Industrial AI capabilities will power decision-making, and won't be a force for contention despite speculation. ... From the regulatory complexities of data collection and storage to varying levels of AI adoption within businesses, a successful transition into Industry 5.0 requires expert support. Costs of AI investments can snowball, so you must be strategic and targeted at improving specific areas of your business. Generic, off-the-shelf AI tools trained on irrelevant data won’t help here. To remain competitive at a global scale, companies need to invest in this technology and work with proven partners.


Why we’re teaching LLMs to forget things

Selective forgetting, something that humans are all too good at, turns out to be exceptionally difficult to recreate in machine learning models. That’s especially true for a class of AI models known as foundation models that may have picked up personal, copyrighted, or toxic information buried in their training data. ... “True unlearning tries to remove all vestiges of the unwanted information, so that when the model gets a problematic question, it simply doesn’t have the answer,” she added. “A model that has ‘unlearned’ insulting behavior no longer knows how to be toxic.” Ideally, unlearning also comes with a mathematical guarantee that the unwanted data’s influence on the model has been erased. Achieving that gold standard, however, typically involves retraining the model, which for LLMs can be prohibitively expensive. One option for unlearning without guarantees is to fine-tune the model on the unwanted data using an optimization technique known as gradient ascent to forget connections between data points. “Using gradient ascent to update the model’s weights is like running the model’s training in reverse,” said Swanand Ravindra Kadhe, a senior research scientist at IBM Research focused on unlearning. 
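
A hedged PyTorch sketch of that gradient-ascent idea: maximize (rather than minimize) the loss on a forget-set batch so the weights move away from the unwanted data. The model, data, learning rate, and step count are placeholders, and real unlearning pipelines add safeguards so general capability is not destroyed.

```python
import torch
import torch.nn as nn

# Placeholder model and "forget set" batch -- stand-ins for a real LLM and data.
model = nn.Linear(16, 4)
forget_inputs = torch.randn(8, 16)
forget_labels = torch.randint(0, 4, (8,))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for _ in range(10):                       # a few unlearning steps
    optimizer.zero_grad()
    loss = criterion(model(forget_inputs), forget_labels)
    (-loss).backward()                    # negate the loss: gradient *ascent*
    optimizer.step()                      # weights move away from the forget set

with torch.no_grad():
    final_loss = criterion(model(forget_inputs), forget_labels).item()
print("loss on forget set after ascent:", final_loss)
```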


Will IPv6 ever replace IPv4?

The year is 2024 though, and the internet still runs on IPv4. So where did it all go wrong? IPv6 has been in migration hell for decades, with every kind of possible initiative to improve IPv6 adoption falling flat, from an official World IPv6 Day in 2011, the World IPv6 'launch' in 2012, and several US Federal government action plans in 2005, 2010, and 2020 (including mandating IPv6 readiness for government networks - a deadline initially set at 2012 and now extended to 2025). There have been numerous incentives for schools and businesses, promotional campaigns from registries and ISPs, conferences, and education campaigns. ... Another serious problem that's faced IPv6 adoption is NAT. NAT is a technology which was designed in 1994 to reduce the number of global IPv4 addresses needed. It allows devices on a private network to share a single IP address, and is present in almost all home routers (and has been for decades). NAT is the reason why your computer has an 'internal' IP address, and needs port forwarding to be accessible directly from the internet (firewall aside). NAT has allowed us to continue to grow the number of devices online well past the exhaustion point of IPv4 to a whopping 30 billion devices.


How the increasing demand for cyber insurance is changing the role of the CISO

Despite CISOs overseeing cybersecurity and the controls meant to blunt cyber risk, they have not historically been the executives who decide whether their organization buys cyber insurance. Instead, CFOs or chief risk officers typically make the call and determine what levels of protection to buy. However, CISOs are taking on larger roles — as they should — in those discussions and the decision-making process because they’re well-positioned to understand the threat landscape, the types of threats that could impact them, and how each one could impact the organization, says Paul Caron, Head of Cybersecurity, Americas at S-RM, a global corporate intelligence and cyber security consultancy. Generally speaking, CISOs are also best positioned to share the organization’s cybersecurity strategy and details of its security controls with insurance brokers or carriers, Caron says. “CISOs are the ones who can best tell their story.” And CISOs are best positioned to review the resources that a selected insurance company would possess to respond to an event and whether those resources would be the best choices. 


Many C-suite execs have lost confidence in IT, including CIOs

Many C-suite executives want the IT team to both keep the systems running and drive strategic innovation, he says, a challenging balancing act. “Organizations perceive IT as struggling to meet these demands, particularly in deploying new technologies like AI, which have raised expectations among business leaders,” he says. “Challenges in managing legacy systems and ongoing talent shortages further exacerbate this issue.” In many cases, the traditional IT team has been separated from the R&D team, with the IT teams tasked with keeping the lights on, some tech leaders say. With IT and business strategies getting more intertwined, and the hard truths involved in that, the value traditionally driven by IT has shifted to product engineering and business units, says Martin Mao, CEO and co-founder of Chronosphere, a cloud observability platform. “The value is not seen in keeping the wheels on the bus,” he says. “IT is stuck in a negative spiral of cost cutting and defense mode versus innovation. There is a huge talent drain occurring from IT to the product engineering side of the house.” IT teams are often burdened with maintaining legacy systems while simultaneously asked to support new technologies such as AI, infrastructure as code, containerization, and cloud services, adds Kenny Van Alstyne.


5 ways data scientists can prepare now for genAI transformation

Data scientists have traditionally developed dashboards as quick and easy ways to learn about new data sets or to help business users answer questions about their data. While data visualization and analytics platforms have added natural language querying and machine learning algorithms over the last several years, data scientists should anticipate a new wave of genAI-driven innovations. ... “With generative AI, the reliance on traditional dashboards diminishes as users can remove the noise of the analytics and get to actionable insights conversationally. Freed from ad-hoc dashboard-generation, data analysts and data scientists will concentrate on documenting organizational knowledge into semantic layers and conducting strategic analytics, creating a virtuous cycle.” Another prediction comes from Jerod Johnson, senior technology evangelist at CData, saying, “As genAI platforms become integrated into visualization tools, they enable more dynamic and interactive representations of data, allowing for real-time synthesis and scenario analysis. Over the next few years, data scientists can expect these tools to evolve to make visualizations more intuitive and insightful, even answering unasked questions for innovative discoveries.”


Can Responsible AI Live In The Cloud?

There are both benefits to be realised and pitfalls to be avoided when migrating AI to the cloud. Cloud providers offer high-spec, affordable infrastructure, often with better security arrangements than on-premises systems can provide – not to mention the capability to handle routine patching and updates. But there are a number of other factors to be mindful of, including:
– Sovereignty: In many cases, it doesn’t matter where models are being trained, data transfer fees permitting. Compute in one area may be significantly cheaper than in another, but if you’re moving data to another country it’s important to understand how data will be handled there, including any differences in governmental or security process.
– Sustainability: It’s also important to know how sustainable and power-efficient your AI cloud partner is, particularly if you’re transferring data to another country. Some countries have very good renewable energy mixes – but others don’t, and some datacentres are intrinsically more efficient than others. Do remember that your AI cloud provider will form part of your scope 3 emissions, so it pays to do your due diligence, particularly since AI can be very power hungry.
– Suitability: The kind of data that your AI system is processing will have an impact on the kind of environment that it needs.


The role of self-sovereign identity in enterprises

By allowing users to selectively share identity attributes, SSI mitigates the risk of overexposure of personal data. This is particularly important in industries like healthcare, financial services, and government, where stringent regulations such as GDPR, HIPAA, and CCPA dictate how personal information is managed. Passwords and traditional authentication methods have long been weak links in enterprise security, and a source of user friction. SSI can eliminate the need for passwords by enabling secure, passwordless authentication via verifiable credentials. This reduces the friction for users while maintaining high security standards. SSI can also improve customer satisfaction by simplifying secure access to services. For enterprises, SSI can also drive efficiency. By decentralizing identity verification, businesses can reduce their reliance on third-party identity providers, cutting costs and minimizing the delays associated with identity proofing processes. SSI’s interoperability across platforms and services ensures that enterprises can implement a single identity solution that works across a wide variety of use cases, from onboarding employees to authenticating customers and partners.


Reachability and Risk: Prioritizing Protection in a Complex Security Landscape

Despite the benefits, most organizations lack the tools and processes to analyze reachability across their infrastructure. Most are limited to a few common approaches with known downsides. External vulnerability scanners provide limited visibility into internal networks. Penetration testing typically focuses on external attack surfaces. And, manual analysis is incredibly time-consuming and error-prone. Achieving comprehensive reachability analysis is challenging, especially for large environments with tens of thousands of assets, as it’s difficult to compute all the states that a system might reach during operation. ... To address these challenges, organizations should leverage network digital twin technology. A sophisticated network digital twin collects L2-L7 state and configuration data across all network devices (load balancers, routers, firewalls and switches). This data is then used to create an accurate topology (on-prem and multi-cloud), calculate all possible paths within the network, analyze detailed behavioral information and make network configuration and behavior searchable and verifiable. Creating an accurate digital replica of an organization’s network infrastructure allows for automated analysis of potential attack paths and reachability between assets.
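
As a toy, hedged version of the reachability computation a digital twin performs, the sketch below models devices and permitted flows as a directed graph and asks which assets an internet-facing node can reach; the topology is invented for illustration.

```python
import networkx as nx  # pip install networkx

# Toy topology: edges are permitted flows derived from device configs (illustrative).
g = nx.DiGraph()
g.add_edges_from([
    ("internet", "edge-fw"),
    ("edge-fw", "web-lb"),
    ("web-lb", "app-server"),
    ("app-server", "db-primary"),
    ("mgmt-jumpbox", "db-primary"),
])

crown_jewels = ["db-primary", "mgmt-jumpbox"]
for asset in crown_jewels:
    if nx.has_path(g, "internet", asset):
        path = nx.shortest_path(g, "internet", asset)
        print(f"REACHABLE from internet: {asset} via {' -> '.join(path)}")
    else:
        print(f"not reachable from internet: {asset}")
```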



Quote for the day:

"Leadership is practices not so much in words as in attitude and in actions." -- Harold Geneen

Daily Tech Digest - October 07, 2024

AI Agents: The Intersection of Tool Calling and Reasoning in Generative AI

Building robust and reliable agents requires overcoming many different challenges. When solving complex problems, an agent often needs to balance multiple tasks at once including planning, interacting with the right tools at the right time, formatting tool calls properly, remembering outputs from previous steps, avoiding repetitive loops, and adhering to guidance to protect the system from jailbreaks/prompt injections/etc. Too many demands can easily overwhelm a single agent, leading to a growing trend where what may appear to an end user as one agent, is behind the scenes a collection of many agents and prompts working together to divide and conquer completing the task. This division allows tasks to be broken down and handled in parallel by different models and agents tailored to solve that particular piece of the puzzle. It’s here that models with excellent tool calling capabilities come into play. While tool-calling is a powerful way to enable productive agents, it comes with its own set of challenges. Agents need to understand the available tools, select the right one from a set of potentially similar options, format the inputs accurately, call tools in the right order, and potentially integrate feedback or instructions from other agents or humans.
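
A model-agnostic, hedged sketch of that tool-calling loop: the "model output" is stubbed as a JSON string, and the runtime validates the tool name and arguments against a registry before executing. The tool set and call format are assumptions.

```python
import json

# Registry of tools the agent is allowed to call (illustrative).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub for a real API call

def convert_currency(amount: float, to: str) -> str:
    return f"{amount * 0.92:.2f} {to}"  # stub exchange rate

TOOLS = {"get_weather": get_weather, "convert_currency": convert_currency}

def run_tool_call(raw_call: str) -> str:
    """Validate and dispatch one model-emitted tool call."""
    call = json.loads(raw_call)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        return f"error: unknown tool '{name}'"       # guard against hallucinated tools
    try:
        return str(TOOLS[name](**args))              # malformed arguments surface here
    except TypeError as exc:
        return f"error: bad arguments for {name}: {exc}"

# The "model output" below is a stand-in for what an LLM would emit.
print(run_tool_call('{"name": "get_weather", "arguments": {"city": "Singapore"}}'))
print(run_tool_call('{"name": "transfer_funds", "arguments": {}}'))
```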


Transforming cloud security with real-time visibility

Addressing the visibility problem first, enables security teams to understand real risk and fix misconfigurations across the organization much faster. As an example, we encounter many teams that face the same misconfiguration across hundreds of assets owned by thousands of developers. Without the right visibility into assets’ behavior, organizations have to go through every individual team, explain the risk, check if their workload actually utilizes the misconfiguration, and then configure it accordingly – essentially an impossible task. With runtime insights, security teams immediately understand what specific assets utilize the misconfigurations, which developers own them, and all the relevant risk contexts around them. This turns what could be a six-month project involving the whole R&D org into a simple task completed in a day by a few individuals. ... One of the top challenges organizations face is maintaining consistent compliance across various cloud environments, especially when those environments are highly dynamic and deployed by multiple stakeholders who don’t necessarily have the right expertise in the space. The solution lies in taking a dual approach.


Patrolling the Micro-Perimeter to Enhance Network Security

As companies move into industrial automation, remote retail sites, remote engineering, etc., the systems and applications used by each company group may need to be sequestered from corporate-wide employee access so that only those users authorized to use a specific system or application can gain access. From a network perspective, segments of the network, which become internal network micro security peripheries, surround these restricted access systems and applications, so they are only available for the users and user devices that are authorized to use them. Multi-factor security protocols are used to strengthen user signons, and network monitoring and observability software polices all activity at each network micro-periphery. The mission of a zero-trust network is to "trust no one," not even company employees, with unlimited access to all network segments, systems, and applications. This is in contrast to older security schemes that limited security checks and monitoring to the external periphery of the entire enterprise network but that didn't apply security protocols to micro-segments within that network. 


CIO intangibles: 6 abilities that set effective IT execs apart

Change leadership is different, and it’s very much a CIO-level skill, she says. “Change leadership is inspiring and motivating you to want to make the change. It’s much more about communication. It’s about navigating the different parts of the organization. It’s co-leading.” It’s one thing, she says, for an IT leader or a change management team to tell users, “This is what we’re doing and why we’re doing it.” It’s at a whole other level to have a business leader say, “Hey team, we’re next. This is what we’re doing. This is why it’s important and here are my expectations of you.” That’s what effective change leadership can accomplish. ... For critical thinking, CIOs need another intangible skill: the ability to ask the right questions. “It’s the whole idea of being more curious,” says Mike Shaklik, partner and global head of CIO advisory at Infosys Consulting. “The folks who can listen well, and synthesize while they listen, ask better questions. They learn to expect better answers from their own people. If you add intentionality to it, that’s a game-changer.” ... “In today’s environment, a lot of technology work does not happen inside of the IT organization,” Struckman says. “Yet leadership expects the CIO to understand how it all makes sense together.”


Building an Internal Developer Platform: 4 Essential Pillars

Infrastructure as Code (IaC) is the backbone of any modern cloud native platform. It allows platform engineering teams to manage and provision infrastructure (such as compute, storage and networking resources) programmatically using code. IaC ensures that infrastructure definitions are version-controlled, reusable and consistent across different environments. ... Security, governance and compliance are integral to managing modern infrastructure, but manual policy enforcement doesn’t scale well and can create bottlenecks. Policy as Code (PaC) helps solve this challenge by programmatically defining governance, security and operational policies. These policies are automatically enforced across cloud environments, Kubernetes clusters and CI/CD pipelines. Essentially, they “shift down security” into the platform. ... GitOps is an operational model where all system configurations, including application deployments, infrastructure and policies, are managed through Git repositories. By adopting GitOps, platform teams can standardize how changes are made and ensure that the actual system state matches the desired state defined in Git.
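As a sketch of the Policy as Code idea, the snippet below expresses governance rules as plain Python functions evaluated against resource definitions before anything is deployed. The resource fields and policy names are hypothetical; production platforms typically use a dedicated engine such as Open Policy Agent rather than hand-rolled checks.

```python
# Each policy is a pure function over a resource description; the platform
# evaluates every policy against every resource produced by an IaC plan.
def bucket_not_public(resource: dict) -> bool:
    return not (resource["type"] == "storage_bucket" and resource.get("public", False))

def required_owner_label(resource: dict) -> bool:
    return "owner" in resource.get("labels", {})

POLICIES = [bucket_not_public, required_owner_label]

def evaluate(resources: list[dict]) -> list[str]:
    violations = []
    for resource in resources:
        for policy in POLICIES:
            if not policy(resource):
                violations.append(f"{resource['name']}: fails {policy.__name__}")
    return violations

# Hypothetical resources as they might come out of an IaC plan.
plan = [
    {"name": "logs-bucket", "type": "storage_bucket", "public": True, "labels": {"owner": "data-eng"}},
    {"name": "api-service", "type": "deployment", "labels": {}},
]
print(evaluate(plan))
# ['logs-bucket: fails bucket_not_public', 'api-service: fails required_owner_label']
```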


Chief risk storyteller: How CISOs are developing yet another skill

Creating a compelling narrative is also important to bolster the case for investment in the cybersecurity program, and it becomes especially important when restructuring or starting a new program. Hughes estimates the base set of requirements in the Center for Internet Security Controls Framework is a $2 to $3 million expense. “That’s a massive expense, so that storytelling and dialogue between you and the rest of the company to create that new, forward expense is significant,” he says. However, just as some stories have their skeptics, CISOs also need to be able to defend their risk story, particularly when there are big dollars attached to it. De Lude has found it can be helpful to stress test the story or presentation with challenge sessions. “I might invite different people to a run-through and explain the concept and ask for potential objections to test and develop a robust narrative,” she says. De Lude has also found that drawing on the internal expertise of people with strong communication skills can help CISOs learn how to project a story in a compelling way. “Having someone lend support who wasn’t a cyber expert but knew how to really convey a strong message in all sorts of different ways was a game changer,” she says.


The Disruptive Potential of On-Device Large Language Models

On-device personal AI assistants transform each device into a powerful companion that mimics human interaction and executes complex tasks. These AI assistants can understand context and learn about their owner's preferences, allowing them to perform a wide range of activities — from scheduling appointments to creative writing — even when offline. By operating directly on the user's device, these AI assistants ensure privacy and fast response times, making them indispensable for managing both routine and sophisticated tasks with ease and intelligence. ... Voice control for devices is set to become significantly more powerful and mainstream, especially with advancements in on-device large language models. Companies like FlowVoice are already paving the way, enabling near-silent voice typing on computers. ... On-device AI therapists have the potential to become mainstream due to their ability to offer users both privacy and responsive, engaging conversations. By operating directly on the user's device, these AI therapists ensure that sensitive data remains private and secure, minimizing the risk of breaches associated with cloud-based services.
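As a rough illustration of what "operating directly on the user's device" looks like in code, here is a minimal sketch using the llama-cpp-python bindings to run a locally stored, quantized model. The model filename is a placeholder; any instruction-tuned GGUF model downloaded to the device would work.

```python
# Requires: pip install llama-cpp-python, plus a locally downloaded GGUF model.
from llama_cpp import Llama

# The model path is a placeholder for whatever model the device has on disk.
llm = Llama(model_path="models/assistant-8b.Q4_K_M.gguf", n_ctx=2048)

# Everything below runs locally: the prompt and the reply never leave the device,
# and it works with no network connection at all.
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Draft a two-line reminder to book a dentist appointment."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```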


Why cloud computing is losing favour

There are various reasons behind this trend. “In the early days, cloud repatriations were often a response to unsuccessful migrations; now they more often reflect changes in market pricing,” says Adrian Bradley, head of cloud transformation at KPMG UK. “The inflation of labour costs, energy prices and the cost of the hardware underpinning AI are all driving up data centre fees. For some organisations, repatriation changes the balance in the relative cost and value of on-premise or hybrid architectures compared to public clouds.” ... There are risks that can come with cloud repatriation. James Hollins, Azure presales solution architect at Advania, highlights the potential to disrupt key services. “Building from scratch on-premises could be complex and risky, especially for organisations that have been heavily invested in cloud-based solutions,” he says. “Organisations accustomed to cloud-first environments may need to acquire or retrain staff to manage on-premises infrastructure, as they will have spent the last few years maintaining and operating in a cloud-first world with a specific skillset.” Repatriation can lead to higher licensing costs for third-party software that many businesses do not anticipate or budget for, he adds. 


Proactive Approaches to Securing Linux Systems and Engineering Applications

With AI taking the world by storm, it is more important than ever for you, as an IT professional, to be vigilant and proactive about security vulnerabilities. The rapid advancement of AI introduces new attack vectors and more sophisticated threats: malicious actors can use AI to automate and scale their attacks, exploiting vulnerabilities at an unprecedented rate and complexity and making traditional security measures increasingly difficult to maintain. Your role in implementing these measures is crucial. ... Diligent patch management is critical for maintaining the security and stability of Linux systems and applications. Administrators play a vital role in this process, ensuring that patches are applied promptly and correctly. ... Automation tools and centralized patch management systems are invaluable for streamlining patch deployment and reducing human error. They ensure that patches are applied consistently across all endpoints, improving overall security and operational efficiency. Administrators can patch systems and applications using configuration management tools such as Ansible and Puppet.
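As a minimal sketch, assuming a Debian or Ubuntu host with root privileges, the script below shows the kind of update-then-upgrade step that a configuration management tool such as Ansible or Puppet would orchestrate across a fleet. The dry-run default simulates the upgrade so an administrator can review it before applying anything.

```python
# Minimal patch step for a Debian/Ubuntu host (run as root). In practice this
# would be driven by Ansible or Puppet across all endpoints rather than run
# by hand on each machine.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def apply_updates(dry_run: bool = True) -> None:
    run(["apt-get", "update"])              # refresh package metadata
    if dry_run:
        run(["apt-get", "-s", "upgrade"])   # -s simulates; nothing is changed
    else:
        run(["apt-get", "-y", "upgrade"])   # apply pending package upgrades

if __name__ == "__main__":
    apply_updates(dry_run=True)
```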


The Role of Architects in Managing Non-Functional Requirements

One of the strongest arguments for architects owning NFRs is that non-functional aspects are deeply integrated into the system architecture. For example, performance metrics, scalability, and security protocols are all shaped by architectural decisions such as choice of technology stack, data flow design, and resource allocation. Since architects are responsible for making these design choices, it makes sense that they should also ensure the system meets the NFRs. When architects own NFRs, they can prioritise these elements throughout the design phase, reducing the risk of conflicts or last-minute adjustments that could compromise the system’s stability. This ownership ensures that non-functional aspects are not seen as afterthoughts but rather integral parts of the design process. ... Architects typically have a high-level, end-to-end view of the system, enabling them to understand how various components interact. This holistic perspective allows them to evaluate trade-offs and balance functional and non-functional needs without compromising the integrity of the system. For example, an architect can optimise performance without sacrificing security or usability by making informed decisions that consider all NFRs. 



Quote for the day:

"Nothing ever comes to one, that is worth having, except as a result of hard work." -- Booker T. Washington

Daily Tech Digest - October 05, 2024

Integrating and Scaling AI Solutions with Modular Architecture

The modular AI ecosystem is a fluid environment comprising various players that contribute to the democratization and commoditization of AI technologies. Foundational model providers (e.g., ChatGPT and Koala) create core capabilities and specialized SLMs. Enterprise AI solution providers (e.g., Kore AI and Haptik) build prepackaged and customized domain and industry-specific solutions. AI service providers (e.g., HuggingFace and Scale AI) offer platforms to build AI models and provide services such as data labeling, prompt engineering, and fine-tuning AI models. Infrastructure players (e.g., AWS and Azure) provide cloud services to host AI models, data storage and management solutions, and high-performance computing resources. This ecosystem facilitates the rapid innovation of AI technologies while broadening their reach. ... Adopting modular AI architectures offers significant opportunities but also presents challenges. While the transition and upfront investment can be costly and demanding, particularly for legacy-laden enterprises, the potential benefits, such as enhanced agility, lower costs, and easier access to specialized AI tools, are compelling.


Why cloud security outranks cost and scalability

As businesses integrate cloud computing, they grapple with escalating complexity and cyberthreats. To remain agile and competitive, they embrace cloud-native design principles, an operational model that allows for independence and scalability through microservices and extensive API usage. However, this does not come without its challenges. ... Complex cloud environments mean that adopting cloud-native designs introduces layers of complexity. Ensuring security across distributed components (microservices and APIs) becomes crucial, as misconfigurations or vulnerabilities can lead to significant risks. I’ve been screaming about this for years, along with others. Although we accept complexity as a means to an end in terms of IT, it needs to be managed in light of its impact on security. Compliance and regulatory pressures mean that many industries face strict regulations regarding data protection and privacy (e.g., GDPR, CCPA). Ensuring compliance requires robust security measures to protect sensitive information in the cloud. Many enterprises are moving to sovereign or local clouds that are local to the laws and regulations they adhere to. Companies view this as reducing risk; even if those clouds are more expensive, the risk reduction is worth it.


Kaspersky products temporarily unavailable on Google Play

Kaspersky confirmed the issue on the company's official forums on Sunday and said that it's currently investigating why its software is no longer available on Google's app store. "The downloads and updates of Kaspersky products are temporarily unavailable on the Google Play store," a Kaspersky employee said. "Kaspersky is currently investigating the circumstances behind the issue and exploring potential solutions to ensure that users of its products can continue downloading and updating their applications from Google Play." While the apps are unavailable, Kaspersky advised users to install them from alternative app stores, including the Galaxy Store, Huawei AppGallery, and Xiaomi GetApps. The company's security apps can also be installed by downloading the .apk installation file from Kaspersky's website. This support page provides more information on how to install and activate Kaspersky's software on Android devices. This comes after Kaspersky told BleepingComputer in July that it would shut down its United States operations after the U.S. government sanctioned the company and 12 executives and banned Kaspersky antivirus software over national security concerns in June.


How to Get Going with CTEM When You Don't Know Where to Start

Continuous Threat Exposure Management (CTEM) is a strategic framework that helps organizations continuously assess and manage cyber risk. It breaks down the complex task of managing security threats into five distinct stages: Scoping, Discovery, Prioritization, Validation, and Mobilization. Each of these stages plays a crucial role in identifying, addressing, and mitigating vulnerabilities before they can be exploited by attackers. ... As transformational as CTEM is, many teams see the list above and understandably back off, feeling it is too complex and nuanced an undertaking. Since the inception of CTEM, some teams have chosen to forgo the benefits because, even with a roadmap, it seems like too cumbersome a lift for them. The most practical way to make CTEM attainable is a unified approach that simplifies implementation by integrating all five stages into one cohesive platform. ... XM Cyber's unified approach to CTEM simplifies implementation by integrating multiple stages into one cohesive platform. This minimizes the complexity associated with deploying disparate tools and processes.


Microsoft Sees Devs Embracing a ‘Paradigm Shift’ to GenAIOps

“One of the key differences with GenAI compared to classic machine learning is that in almost all cases, the GenAI model was not built by the developers’ organization; rather it licensed it or accessed it via an API or downloaded it from an open source repository such as Hugging Face,” Patience told The New Stack. “That puts a greater importance on choosing the right models for the task. Contrast that with narrower predictive models using classic machine learning which were usually built and trained using the organization’s own data.” Many LLMs are massive in size and GenAIOps will bring a more orderly process to collecting, curating, cleaning, and creating proper data sets and the proper measured creation of models with specific checkpoints, Andy Thurai, principal analyst at Constellation Research, told The New Stack. “Otherwise, it will lead to chaos for many reasons,” Thurai said. “This can also lead to huge infrastructure costs if the models are not trained properly. So far, many developers use random techniques and procedures to create ML models or even LLMs. These defined processes, technologies, and procedures bring some order to the creation, deployment, and maintenance of those models.”


How Tech Companies Are Readying IT Security For Quantum Computing

When preparing for PQC, a good place to start is to identify all the points of encryption in your organization. Start with sensitive areas including VPNs, external server access and remote access. IT leaders should also identify the cryptographic methods currently in use and think about how the organization can upgrade to post-quantum standards in the future. Some encryption methods in use today are particularly vulnerable to future quantum computers. For example, a method called RSA (named after Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977) protects a large portion of internet traffic. Its security rests on factoring large numbers into primes, which is hard for traditional computers but much easier for a quantum computer. Before a powerful quantum computer arrives, organizations will need to replace RSA. Fortunately, there are interim options. One is to double the number of bits current RSA encryption uses, from 2,048 to 4,096, a size that is harder to crack even for quantum computers. The same goes for other encryption schemes: by increasing the problem size, you can make it much harder to solve.
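As a small illustration of the key-size suggestion, and assuming the widely used Python cryptography package, the sketch below generates a 4,096-bit RSA key in place of a 2,048-bit one. This is an interim hardening step, not a substitute for migrating these call sites to post-quantum algorithms as standards mature.

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# Generate a 4,096-bit key instead of the common 2,048-bit default. Call sites
# like this one are exactly what an inventory of encryption points should flag
# for eventual migration to post-quantum algorithms.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode()[:120], "...")
```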


Why MFA alone won’t protect you in the age of adversarial AI

“MFA changed the game for a long time,” said Caulfield. “But what we’ve found over the past 5 years with these recent identity attacks is that MFA can easily be defeated.” One of the greatest threats to MFA is social engineering or more personalized psychological tactics. Because people put so much of themselves online, via social media or LinkedIn, attackers have free rein to research anyone in the world. Thanks to increasingly sophisticated AI tools, stealthy threat actors can craft campaigns “at mass scale,” said Caulfield. They will initially use phishing to access a user’s primary credential, then employ AI-based outreach to trick them into sharing a second credential or taking an action that lets attackers into their account. Or, attackers will spam the secondary MFA SMS or push notification method, causing “MFA fatigue,” until the user eventually gives in and pushes “allow.” Threat actors will also prime victims, making situations seem urgent, or fool them into thinking they’re getting legitimate messages from an IT help desk. With man-in-the-middle attacks, meanwhile, an attacker can intercept a code during transmission between user and provider.
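To see why an intercepted or relayed one-time code works for an attacker, here is a small sketch using the pyotp library (an assumption; any TOTP implementation behaves similarly): the server only checks that a code matches within a short time window, so a code phished or captured in transit verifies just as well as one typed by the legitimate user.

```python
# Requires: pip install pyotp
import pyotp

secret = pyotp.random_base32()   # shared secret provisioned to the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code the user currently sees

# The verifier only checks that the code matches within a small time window.
# A code phished or relayed by a man-in-the-middle inside that window passes
# the same check as one entered by the real user.
print(totp.verify(code))                    # True for the legitimate user
print(totp.verify(code, valid_window=1))    # also True for an attacker replaying it moments later
```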


How Functional Programming Can Help You Write Efficient, Elegant Web Applications

Functional programming might seem intimidating and overly academic at first, but once you get the hang of it, it's a game-changer and a lot of fun on top of it! To better understand how functional programming can help us build more maintainable software, let's start from the beginning and understand why a program becomes harder and harder to maintain as it grows larger. ... Another advantage of pure functions is that, for the reasons above, they are easy to test. There is no need to mock objects because every function depends only on its inputs, and there is no need to set up or verify internal state at the end of the tests because pure functions have none. Finally, using immutable data and pure functions dramatically simplifies the parallelisation of tasks across multiple CPUs and machines on the network, which is why many so-called "big data" solutions have adopted functional architectures. However, there are no silver bullets in computer programming. Both the functional approach and the object-oriented approach have tradeoffs. If your application has very complex mutable state that is primarily local, it may take a lot of work to model in a functional design.
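A tiny Python illustration of the testability point, with hypothetical pricing functions: the pure version needs no fixtures or mocks, and because it carries no hidden state it also parallelises trivially.

```python
# Impure: the result depends on hidden module state, so a test must set
# that state up first and reset it afterwards.
_discount_rate = 0.1

def price_with_state(amount: float) -> float:
    return amount * (1 - _discount_rate)

# Pure: everything the function needs arrives as an argument, and the same
# inputs always produce the same output, so a test is a single assertion.
def price(amount: float, discount_rate: float) -> float:
    return amount * (1 - discount_rate)

assert price(100.0, 0.1) == 90.0   # no fixtures, no mocks, no teardown

# Immutable inputs plus a pure function also spread across processes safely.
from concurrent.futures import ProcessPoolExecutor

if __name__ == "__main__":
    amounts = [100.0, 250.0, 40.0]
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(price, amounts, [0.1] * len(amounts))))
```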


AI has a stupid secret: we’re still not sure how to test for human levels of intelligence

Traditional human IQ tests have long been controversial for failing to capture the multifaceted nature of intelligence, encompassing everything from language to mathematics to empathy to sense of direction. There's an analogous problem with the tests used on AIs. There are many well-established tests covering such tasks as summarising text, understanding it, drawing correct inferences from information, recognising human poses and gestures, and machine vision. Some tests are being retired, usually because the AIs are doing so well at them, but they're so task-specific as to be very narrow measures of intelligence. For instance, the chess-playing AI Stockfish is way ahead of Magnus Carlsen, the highest-rated human player of all time on the Elo rating system. Yet Stockfish is incapable of doing other tasks such as understanding language. Clearly it would be wrong to conflate its chess capabilities with broader intelligence. But with AIs now demonstrating broader intelligent behaviour, the challenge is to devise new benchmarks for comparing and measuring their progress. One notable approach has come from French Google engineer François Chollet. He argues that true intelligence lies in the ability to adapt and generalise learning to new, unseen situations.


How CISOs are navigating the “slippery” AI data protection problem

The problem, according to Hudson, is that policing this is "too slippery" and that as soon as businesses say no to their staff, or block access to the platforms, they simply find ways to circumvent these measures. Hudson asked a panel of CISOs at leading financial institutions in the US how they were navigating this landscape fraught with potential privacy violations. Togai Andrews, CISO at the US Bureau of Engraving and Printing, said he had been working on developing a governance policy to allow the use of generative AI technology in a responsible way but struggled to back up this policy with effective technical controls. Andrews said this failure to enforce the policy was laid bare in a recent internal report on employee use of generative AI in the office, noting that he was virtually powerless to prevent it. "A month ago I got a report that stated about 40% of our users were using [tools like] Copilot, Grammarly, or ChatGPT to make reports and to summarize internal documents, but I had no way of stopping it." He explained that as a result he has changed his approach to ensuring employees have a better grasp of the data risks associated with using such tools in their day-to-day workflow.



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher