Daily Tech Digest - June 19, 2024

Executive Q&A: Data Quality, Trust, and AI

Data observability is the process of interrogating data as it flows through a marketing stack -- including data used to drive an AI process. Data observability provides crucial visibility that helps users both interrogate data quality and understand the level of data quality prior to building an audience or executing a campaign. Data observability is traditionally done through visual tools such as charts, graphs, and Venn diagrams, but is itself becoming AI-driven, with some marketers using natural language processing and LLMs to directly interrogate the data used to fuel AI processes. ... In a way, data silos are as much a source of great distress to AI as they are to the customer experience itself. A marketer might, for example, use a LLM to help generate amazing email subject lines, but if AI generates those subject lines knowing only what is happening in that one channel, it is limited by not having a 360-degree view of the customer. Each system might have its own concept of a customer’s identity by virtue of collecting, storing, and using different customer signals. When siloed data is updated on different cycles, marketers lose the ability to engage with a customer in the precise cadence of the customer because the silos are out of synch with a customer journey.

Only 10% of Organizations are Doing Full Observability. Can Generative AI Move the Needle?

The potential applications of Generative AI in observability are vast. Engineers could start their week by querying their AI assistant about the weekend’s system performance, receiving a concise report that highlights only the most pertinent information. This assistant could provide real-time updates on system latency or deliver insights into user engagement for a gaming company, segmented by geography and time. Imagine being able to enjoy your weekend and arrive at work with a calm and optimistic outlook on Monday morning, and essentially saying to your AI assistant: “Good morning! How did things go this weekend?” or “What’s my latency doing right now, as opposed to before the version release?” or “Can you tell me if there have been any changes in my audience, region by region, for the past 24 hours?” These interactions exemplify how Generative AI can facilitate a more conversational and intuitive approach to managing development infrastructure. It’s about shifting from sifting through data to engaging in meaningful dialogue with data, where follow-up questions and deeper insights are just a query away.

The Ultimate Roadmap to Modernizing Legacy Applications

IT leaders say they plan to spend 42 percent more on average on application modernization because it is seen as a solution to technical debt and a way for businesses to reach their digital transformation goals, according to the 2023 Gartner CIO Agenda. But even with that budget allocated, businesses still face significant challenges, such as cost constraints, a shortage of staff with appropriate technical expertise, and insufficient change management policies to unite people, processes and culture around new software. To successfully navigate the path forward, IT leaders need a strategic roadmap for application modernization. The plan should include prioritizing which apps to upgrade, aligning the effort with business objectives, getting stakeholder buy-in, mapping dependencies, creating data migration checklists and working with trusted partners to get the job done. ... “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes in a post.

Is it time to split the CISO role?

In one possible arrangement, a CISO reports to the CEO and a chief security technology officer (CSTO), or technology-oriented security person, reports to the CIO. At a functional level, putting the CSTO within IT gives the CIO a chance to do more integration and collaboration and unites observability and security monitoring. At the executive level, there’s a need to understand security vulnerabilities and the CISO could assist with strategic business risk considerations, according to Oltsik. “This kind of split could bring better security oversight and more established security cultures in large organizations.” ... To successfully change focus, CISOs would need to get a handle on things like the financials and company strategy and articulate cyber controls in this framework, instead of showing up every quarter with reports and warnings. “CISOs will need to incorporate their risk taxonomy into the overall enterprise risk taxonomy,” Joshi says. In this arrangement, however, the budget could arise as a point of contention. CIO budgets tend to be very cyber heavy these days, Joshi explains, and it could be difficult to create the situation where both the CISO and CIO are peers without impacting this allocation of funds.

Empowering IIoT Transformation through Leadership Support

To gain project acceptance and ultimately ensure project success will rely heavily on identifying all key stakeholders, nurturing an on-going level of mutual trust and maintaining a strong focus on targeted end results. This involves a full disclosure of desired outcomes and a willingness to adapt to individual departmental nuances. Begin with a cross-department kickoff/planning meeting to identify interested parties, open projects, and available resources. Invite participation through a discovery meeting, focusing on establishing the core team, primary department, cross-department dependencies, and consolidating open projects or shareable resources. ... Identifying all digital data blind spots at the outset highlights the scale of the problem. While many companies have Artificial Intelligence (AI) and Business Intelligence (BI) initiatives, their success depends on the quality of the source data. Consolidating these initiatives to address digital data blind spots strengthens the data-driven business case. Once a critical mass of baselines is established, projecting Return On Investment (ROI) from both a quantification and qualification perspective becomes possible. 

Will more AI mean more cyberattacks?

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work. One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.” Jow believes organisations need to wake up to the risk of such activities.

What It Takes to Meet Modern Digital Infrastructure Demands and Prepare for Any IT Disaster

As you evaluate the evolving needs of your organization’s own infrastructure demands, consider whether your network is equipped to handle a growing volume of data-intensive applications — and if your team is ready to act in the face of unexpected service interruption. The push to adopt advanced technologies like AI and automation are the main drivers of network optimization for most organizations. But the growing prevalence of volatile, uncertain, complex, and ambiguous (VUCA) situations is another reason to review your communications infrastructure’s readiness to withstand future challenges. VUCA is a catch-all term for a wide range of unpredictable and challenging situations that can impact an organization’s operations, from natural disasters to political conflict, economic instability, or cyber-attacks. ... Maintaining operational continuity and resilience in the face of VUCA events requires a combination of strategic planning, operational flexibility, technological innovation, and risk-management practices. This includes investing in technology that improves agility and resilience as well as in people who are prepared for adaptive decision-making when VUCA situations arise.

APIs Are the Building Blocks of Bank Innovation. But They Have a Risky Dark Side

A key point is that it’s not just institutions suffering. Frequently APIs used by banks draw on PII (personally identifiable information) such as social security numbers, driver’s license data, medical information and personal financial data. APIs may also handle device and location data. “While this data may not seem as sensitive as PII or payment card details at first glance, it can still be exploited by malicious actors to gain insights into a user’s behavior, preferences and movements,” the report says. “In the wrong hands, this information could be used for targeted phishing attacks, social engineering, or even physical threats.” “Everything in the financial transaction world today is going across the internet, via APIs,” says Bird. ... Bird points out that the bad guys have more than just tools from the dark web to help them do their business. Frequently they tap the same mainstream tools that bankers would use. He laughs when he recalls demonstrating to a reporter how a particular fraud would have been assisted using Excel pivot tables. The journalist didn’t think of criminals using legitimate software. “Why wouldn’t they?” said Bird.

Enterprise AI Requires a Lean, Mean Data Machine

Today’s LLMs need volume, velocity, and variety of data at a rate not seen before, and that creates complexity. It’s not possible to store the kind of data LLMs require on cache memory. High IOPS and high throughput storage systems that can scale for massive datasets are a required substratum for LLMs where millions of nodes are needed. With superpower GPUs capable of lightning-fast read storage read times, an enterprise must have a low-latency, massively parallel system that avoids bottlenecks and is designed for this kind of rigor. ... It’s crucial that these technological underpinnings of the AI era be built with cost efficiency and reduction of carbon footprint in mind. We know that training LLMs and the expansion of generative AI across industries are ramping up our carbon footprint at a time when the world desperately needs to reduce it. We know too that CIOs consistently name cost-cutting as a top priority. Pursuing a hybrid approach to data infrastructure helps ensure that enterprises have the flexibility to choose what works best for their particular requirements and what is most cost-effective to meet those needs.

Building Resilient Security Systems: Composable Security

The concept of composable security represents a shift in the approach to cybersecurity. It involves the integration of cybersecurity controls into architectural patterns, which are then implemented at a modular level. Instead of using multiple standalone security tools or technologies, composable security focuses on integrating these components to work in harmony. ... The concept of resilience in composable security is reflected in a system's ability to withstand and adapt to disruptions, maintain stability, and persevere over time. In the context of microservices architecture, individual services operate autonomously and communicate through APIs. This design ensures that if one service is compromised, it does not impact other services or the entire security system. By separating security systems, the impact of a failure in one system unit is contained, preventing it from affecting the entire system. Furthermore, composable systems can automatically scale according to workload, effectively managing increased traffic and addressing new security requirements.

Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - June 18, 2024

The Intersection of AI and Wi-Fi 7

Wi-Fi 7 is the newest standard in wireless networking. Though official ratification isn't expected until the end of 2024, Wi-Fi 7 client devices and wireless access points are already available. The top line speed of Wi-Fi 7 is often stated at 46 Gbps, but actual speeds will be lower. The higher speeds of Wi-Fi 7 are delivered by using a 320 MHz wide channel, increasing the transmission rate to 4K QAM and increasing the number of transmit and receive chains to 16. Another key advantage of Wi-Fi 7 is a significant reduction in packet latency, thanks to a feature called Multi-Link Operation (MLO). ... AI Autonomous Networks consolidate key performance indicators to aid decision-making. During the shift from 2.4 GHz and 5 GHz to 6 GHz networking, IT managers can use AI to expose timing and predict improvements, facilitating timely network upgrades. Another example is digital twin architecture, which simulates the network environment using real-world client analytics to model behavior, evaluate security changes, and assess configuration adjustments. The goal is to provide IT managers with tools for timely and accurate decisions.

Linux in your car: Red Hat’s milestone collaboration with exida

Red Hat’s collaboration with exida marks a significant milestone. While it may not be obvious to all of us, Linux is playing an increasingly important role in the automotive industry. In fact, even the car you’re driving today could be using Linux in some capacity. Linux is very well known and appreciated in the automotive industry with increasing attention being paid both to its reliability and its security. The phrase “open source for the open road” is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment. The safety of vehicles that get us from one place to another on a nearly daily basis has become a serious priority. ... Their focus on ensuring the safety of both individual components and the operating system as a whole is crucial. This latest achievement brings them even closer to realizing the first continuously-certified in-vehicle Linux Red Hat In-Vehicle Operating System. Their open source first approach to the organization, culture and thought process is an exemplary superset of what exida regards as a best practice for world-class safety culture. 

How CIOs Can Integrate AI Among Employees Without Impacting DEI

As technology adoption accelerates, employees risk falling behind in adapting to meet enterprise demands. This trend has been evident across computing eras, from PCs to the current AI and Internet of Things era. Each phase widens the gap between technology introduction and employees’ ability to use it effectively. ... To prioritize DEI in addressing employee upskilling to leverage AI, CIOs can embrace a spectrum of initiatives, from establishing peer mentorship programs to providing access to online courses, workshops, and conferences. The aim is to promote educational opportunities for those most at risk of falling behind, which will increase the cost risk in the future due to the extra cost of retraining staff or seeking new talent. To successfully link digital dexterity to DEI to prepare employees, CIOs should implement a training program that equitably exposes all workforce segments to AI and the machine economy to develop soft and technical skills. Shift the focus of AI adoption away from solely business needs and focus on individual empowerment

What is a CAIO — and what should they know?

CAIOs and others tasked with overseeing AI deployments play an essential role in “shaping an organization’s strategic, informed and responsible use of AI,” he said. “There are many responsibilities baked into the role, but at its core, it’s about steering the direction of AI initiatives and innovation to align with company goals. AI leads must also create a culture of collaboration and continuous learning.” ... While CAIOs might not always be seated at the C-suite table, those who are there are keenly focused on genAI and its potential to drive efficiencies and profits. Without an executive guiding those deployments, achieving the performance and ROI organizations seek will be tough, she said. “It’s hard to imagine how pieces come together and how you’d bring together so many players,” Kosar said, noting that PwC has more than a dozen different LLMs running internally to power AI tools and products in virtually every business unit. “You have to have the ability to do short-term and long-term planning and balance the two and stay focused on innovation,” she continued. “At the same time, you need to recognize the pace of change while not getting distracted by the latest shiny object.”

How AI is impacting data governance

Every organization needs to establish policies around the handling of its data—informed by federal, state, industry, and international regulations as well as internal business rules. In larger enterprises, a data governance committee sets those policies and specifies how they should be followed in a living document that evolves as regulations and procedures change. The natural language capabilities of generative AI can pop out first drafts of that documentation and make subsequent changes much less onerous. By analyzing data usage patterns, regulatory requirements, and internal workflows, AI can help organizations define and enforce data retention policies and automatically identify data that has reached the end of its useful life. ... AI-powered disaster recovery systems can help organizations develop sound recovery strategies by predicting potential failure scenarios and establishing preventive measures to minimize downtime and data loss. Backup systems infused with AI can ensure the integrity of backups and, when disaster strikes, automatically initiate recovery procedures to restore lost or corrupted data.

The impact of compliance technology on small FinTech firms

However, smaller firms often struggle to adapt quickly due to resource constraints, leading to a more reactive compliance management approach. For smaller firms, running on thin resources could mean higher risks. Many operate with minimal compliance staff or assign compliance duties to employees who juggle multiple roles. This can stretch employees too thin, making it tough to keep up with regulatory changes or manage conflicts of interest that might jeopardize the firm. The use of basic tools like spreadsheets and emails increases the risk of missing important updates or failing to adequately address identified risks due to the lack of clear ownership and effective action plans. Furthermore, regulatory penalties can disproportionately impact smaller firms that lack the financial buffer to absorb significant fines. The ever-evolving regulatory landscape poses an ongoing risk to compliance. Smaller firms must navigate a vast array of compliance policies and procedures. Even those with dedicated compliance or legal experts face the challenge of sifting through extensive documentation to identify relevant changes. 

Revolutionising firms’ security with SASE

For Indian companies, today is an opportune time to have a well-thought long-term SASE strategy and identify short-term consolidation tactics to achieve your desired SASE model. There may be a change required in the firm’s IT culture to adopt integrated networking and security teams, which involves a shift from silo ways of working to shared control. Because no two SASE journeys are the same, therefore, it is up to enterprises to prepare differently and plan for different or customized outcomes. And the first step to doing so is selecting a trusted partner to help in the assessment of your network and security roadmaps against SASE as the reference architecture. Just as significant as the delivery and operational components of SASE, is having a partner who understands innovation and agility, with an eye towards the future. The partner should be able to assist in technology evaluation, establish proof of value, and recommend adaptations to integrate SASE components – all of which go toward laying the foundation for the firm’s security and network roadmaps. Firms should know that when it comes to executing SASE, it isn’t just done and dusted but a multi-disciplinary project with moving parts.

The Next Phase of the Fintech Revolution: Inside the Disruption and the Challenges Facing Banking

The thing that’s causing the most waves right now, frankly, is the regulators. We had evolved to this architecture where you had fintechs doing their thing. You had sponsor banks of various types underneath who were actually bearing the regulatory burden and holding the cash — things that only banks can really do. And then you had these middleware companies that are generically kind of known as banking as a service companies (BaaS). That architecture, which underpins much of the payments, lending and banking innovation that we’ve seen, has now been called into question by regulators and is being litigated ... The most important theme right now is the implications of generative AI for financial services and, not least of all, retail banking. What’s being funded right now are basically vendors. So, this new crop of technology companies is springing up to serve banks and financial institutions more generally and help them with digital transformation as it relates to generative AI. So, you could think of chatbot companies as being probably the most advanced wedge on this and customer service generally as a way to introduce generative AI, lower OpEx and create more customer delight.

Data Governance and AI Governance: Where Do They Intersect?

AI governance needs to cover the contents of the data fed to and retrieved through AI, in addition to considering the level of AI intelligence. Doing so addresses issues like biases, privacy, use of intellectual property, and misuse of the technology. Consequently, AIG needs to guide what subject matter can be processed through AI, when, and in what contexts. ... AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. ... The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 

Enhancing security through collaboration with the open-source community

Without funding, it is difficult for open-source projects to get official certifications. So, companies in regulated sectors that need those certifications often can’t use open-source solutions. For the rest, open-source really has “eaten the world.” Most modern tech companies wouldn’t exist without open-source tools, or would have drastically different offerings. ... Too many just download the open-source project and run away. One way for corporate entities to get involved is by contributing bug fixes and small features. This can be done through anonymous email accounts if it’s necessary to keep the company’s involvement private. Companies should also use the results of their security analysis to help improve the original project. There is some self-interest involved here. Why should a company use its resources to maintain proprietary patches for an open-source project when it can instead send those patches back and have the community maintain them for free? Google has been doing a good job of this with their OSS-FUZZ project. It has found many bugs and helped a large number of the open-source projects using it.

Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - June 17, 2024

The agility and flexibility to adjust sizing as needed are critical for resilience. "You can upscale or downscale based on growth, and this is why resilience is important. In case of any economic or external risks, we are prepared to run our operations, and our IT services are equipped to meet business demands," she said. ... Integrating technology to connect with suppliers and customers is crucial for mitigating risks and enhancing collaboration. "Accurate and agile information through integrations is crucial for resilience," Fernando said. Hela Clothing has automated plant and manufacturing flows to provide real-time visibility and efficiency, ensuring data availability for the business to recalibrate and move swiftly in response to capacity issues. "Security, data governance and skilled talent are the fulcrum of any resilient strategy," Najam said. Data is the new oil, but it needs to be refined to be valuable. Proper data governance and management are crucial for high-confidence analytics. Building strong partnerships with vendors and customers is also essential for a resilient organization. 

The 10 biggest issues IT faces today

All the work around AI has further highlighted the value of data — for the organizations and hackers alike. That, along with the ever-increasing sophistication of the bad actors and the consequences of suffering an attack, has turned up the heat on CIOs. “Indications are that hackers/ransomware agents are becoming more aggressive. At the same time, operations and decision-making are increasingly dependent on data availability and accuracy. Meanwhile, the perimeter of exposure widens as remote workers and connected devices proliferate. This is an arms race, and the CIO must lead the charge by implementing better tools and training,” Carco says. ... Swingtide’s Carco sees a related issue that many CIOs face today, which is effectively managing vendors as the number of providers within the IT function dramatically grows. “CIOs are coming to recognize that an organization built for internal operations is not well-suited to managing dozens or hundreds of external providers, and the proliferation of contractual obligations can be overwhelming. In case of emergency, knowing who has your data, what their contractual obligations are to safeguard it, and how they are performing has become extremely difficult,” Carco says.

Solving the Challenges of Multicloud Cost Management

The issue starts with the mere task of importing billing data from cloud providers. Although all major public clouds generate bills that detail what you spent each month, they expose the billing data in a different way. Amazon Web Services (AWS) generates a large CSV file... For its part, Azure expects customers to import billing data using APIs. This means that simply getting all of your billing data into a central location requires implementing multiple data importation workflows. Once the data is centralized, comparing it can be challenging because each cloud provider structures billing data a bit differently. ... Consider, too, that GCP breaks cloud server spending into separate costs for compute and memory. AWS and Azure don't do this; they report billing information based only on the total resources consumed by a cloud server. Thus, if you use AWS or Azure, you need to disaggregate the data yourself if you want the level of granularity that GCP provides by default — and doing so is important if you need to make an apples-to-apples comparison of what you spend for both compute and memory across clouds.

The rise of SaaS security teams

The challenge of securing a SaaS environment demands a multifaceted security strategy and that starts with a strong SaaS security team. Providing education in line with employee’s job functions is essential. So, for security teams that means ongoing training and professional development opportunities so they are up-to-date on the latest threats and technologies. Training is particularly important when it comes to the tools they’ll be utilizing in order to fully take advantage of the capabilities offered. A security team is only as good as the tools they are given to work with so companies need to make sure that they’re deploying (and updating) advanced security tools that are tailored to cloud applications. Teams also need standardized processes for incident response, regular security assessments, and compliance monitoring as an established workflow lends itself to consistency across an organization especially with the diverse nature of the SaaS ecosystem. While not specific to setting up a security team, once the team is in place zero trust’s principle of “never trust, always verify” will go a long way to strengthening not only a SaaS security posture but that of the entire organization.

Fostering tech innovators through entrepreneurial engineering education

Cultivating an entrepreneurial mind-set is critical in this educational transformation. It encourages students to look beyond conventional career paths and to consider how they can make societal impacts through innovation. Entrepreneurial development cells and similar initiatives within universities play a crucial role by providing mentorship, resources, and support systems that propel students to explore innovative ideas and bring them to life. These programs are pivotal in aiding students to launch their own ventures, thereby enriching the start-up ecosystem directly. Moreover, there is a growing emphasis on integrating emerging technologies such as artificial intelligence (AI), machine learning, blockchain, and the Internet of Things (IoT) into the engineering curriculum and to some extent this has already happened too. This integration ensures that students are not only consumers of technology but also its creators. Project-based problems utilizing these technologies to address practical problems—such as developing AI-driven healthcare solutions or blockchain-based supply chain enhancements—highlight the tangible impacts of a robust educational framework. 

Grooming Employees for Leadership Roles

Empathy is critical to fostering an inclusive work environment and building a culture of trust. As a leader, one has to empower the team to have the courage to take risks, make tough decisions in the face of adversity, and remove the fear of failure. This is possible with open and honest communication with teams and continuous encouragement to proactively drive ‘difficult’ projects. Also, I believe a well-acknowledged team is a highly motivated team. Recognising achievers in your team keeps them motivated to keep outperforming and also fosters a culture of continuous improvement. While we keep achieving new milestones and grow exponentially as an organisation, it is equally important to recognise our talent and motivate them to grow. ... As a leader, one has to set clear expectations with their teams, provide support and mentoring, create a safe space for experimentation, encourage cross-functional collaboration, empower teams to take decisions, and invest in technology and infrastructure. As an organisation, Welspun Corp encourages its people to keep acquiring new knowledge, refine their skills, and adapt to change. 

Disaster recovery vs ransomware recovery: Why CISOs need to plan for both

Many organizations approach disaster recovery and cyber incident response measures from a compliance perspective. They want to check all the required boxes, which means that sometimes, “they do the bare minimum,” says Igor Volovich, vice president of compliance strategy at Qmulos. While doing this is necessary, it is not sufficient. The better approach, he suggests, would be to treat compliance requirements as a detailed guide and adopt a more holistic view based on data that is automatically collected, analyzed, and reported in real time. This involves, of course, strengthening the security posture, as well as developing or updating a thorough disaster recovery plan. ... When it comes to creating the resilience strategy, Ramakrishnan recommends having separate plans for different potential crises and storing them in physical folders in the network operations center, in addition to electronic copies. “While electronic access is crucial, physical documentation provides a tangible backup and is easily accessible in situations where digital systems may be compromised,” he says.

Preparing Your DevOps Workforce for the Shifting Landscape of Tech Talent

Traditionally, the tech industry placed a high value on academic degrees. However, as the pace of technological change accelerates, the skills required to succeed in this field are evolving rapidly. ... The move towards skills-based hiring is a response to the pragmatic needs of the industry. Hiring managers are increasingly prioritizing candidates with tangible skills and certifications directly applicable to the projects and technologies at hand. This approach opens up opportunities for a wider pool of talent, including those who may have taken non-traditional paths to acquire their skills. ... By prioritizing upskilling and cross-skilling initiatives, companies can cultivate a versatile and adaptable workforce ready to tackle the challenges of emerging technologies. It is much like applying DevOps practices to staff development - being agile with iterative learning and adaptation and continuously updating your skills. ... As we look to the future, one thing is clear: the tech industry's approach to talent is undergoing a profound transformation. Yesterday's methods won't train tomorrow's workforce. 

Here’s When To Use Write-Ahead Log and Logical Replication in Database Systems

Logical replication provides benefits compared to methods such as WAL. Firstly, it offers the advantage of replication, allowing for the replication of tables or databases rather than all changes, which enhances flexibility and efficiency. Secondly, it enables replication, facilitating synchronization across types of databases, especially useful in environments with diverse systems. Moreover, logical replication grants control over replication behavior including conflict resolution and data transformation, leading to accurate data synchronization management. Depending on the setup, logical replication can function asynchronously or synchronously, providing options to prioritize performance or data consistency based on requirements. These capabilities establish replication as a robust tool for maintaining synchronized data in distributed systems. Logical replication presents a level of adaptability for administrators by allowing them to select which data to replicate for targeted synchronization purposes. This feature streamlines the process by replicating tables or databases and reducing unnecessary workload. 

When to ignore — and believe — the AI hype cycle

Established tech incumbents and startups are transforming their technology platforms simultaneously and big technology platform providers are also displaying an incredible amount of agility in adapting. This translates into a much more rapid evolution of the build with gen AI stacks compared to what we saw in the early days of the build with the cloud. If compute and data are the currency of innovation in gen AI, we have to ask ourselves where are startups sustainably positioned versus established tech incumbents who have structural advantages and more access to compute. Higher up in the stack, the opportunity in applications seems quite vast — but given where we are in the hype cycle, the reliability of AI outputs, the regulatory landscape and advancements in cybersecurity posture are key gating factors that need to be addressed for commercial adoption at scale. Lastly, foundation models have achieved the performance they have due to pre-training on internet scale datasets. What still lies ahead to realize the benefits of AI is the ability to assemble large, high-quality datasets to build models in more industry-specific domains. 

Quote for the day:

“If you're doing your best, you won't have any time to worry about failure.” -- H. Jackson Brown, Jr.

Daily Tech Digest - June 16, 2024

Human and AI Partnership Drives Manufacturing and Distribution Forward

Industry 5.0 offers a promising solution to the persistent challenge of labor shortages. By fostering a symbiotic dynamic between humans and robots, it lightens the resourcing burden. Human workers bring adaptability and problem-solving skills to the table, while robots contribute to speed and precision in task handling. This collaboration not only boosts job satisfaction and productivity but also promotes employee skill development and reduces overall errors. Moreover, for any hazardous tasks, Industry 5.0 assigns robots to handle physically demanding or risky duties, enhancing safety and minimizing human error in critical situations, thus creating a healthier work environment. It can also significantly enhance supply chain resilience, a critical concern on every manufacturer and distributor’s radar following the recent Red Sea crisis. Leveraging real-time data analytics and AI-driven insights assists human decision-making in predicting and mitigating disruptions. Advanced sensors and IoT devices continuously monitor supply chain activities, including early detection of potential issues such as transportation delays or inventory shortages. 

Beyond Traditional: Why Cybersecurity Needs Neurodiversity

Neurodiverse individuals often exhibit exceptional logical and methodical thinking, attention to detail, and cognitive pattern recognition skills. For example, they can hyperfocus on tasks, giving complete attention to specific issues for prolonged periods, which is invaluable in identifying and mitigating security threats. Their ability to engage deeply in their work ensures that even the smallest anomalies are detected and addressed swiftly. Moreover, many neurodiverse individuals thrive on repetitive tasks and routines, finding comfort and even excitement in long, monotonous processes. This makes them well-suited for roles that involve continuous monitoring and analysis of security data. Their high levels of concentration and persistence allow them to stay on task until solutions are found, ensuring thorough and effective problem-solving. Creativity is another significant benefit that neurodiverse individuals bring to cybersecurity. Their unique, nonlinear thinking enables them to approach problems from different angles and develop innovative solutions. This creativity is crucial for devising new methods to counteract evolving cyber threats. 

Missing Links: How to ID Supply Chain Risks

Current events seem to indicate that supply chain resilience is something companies need to master, sooner rather than later. To get there, they need real-time, end-to-end visibility into supply chain issues and the ability to proactively plan for various types of supply chain risks. “We have discussed the next best action for decades in our supply chains and operations, but realistically, we have never had the flexibility in our process and systems to enable that,” says Protiviti’s Petrucci. “As the world is adopting cloud and more cloud-native design and thinking it will enable us to move close to breaking away from the traditional systems and design more capable supply chain risk, execution, and next best action capabilities. We have started to enable our customers in moving in this direction.” ... The increasing risk of being tied to one region is now at the highest level ever, and I believe we’ll continue to see a shift in supplier sourcing strategies, with the pendulum swinging towards regional diversification,” says Fictiv’s Evans. “Regional optionality continues to be top of mind for supply chain leaders based on geopolitical uncertainties and the need to mitigate risk where possible. 

Human I/O: Detecting situational impairments with large language models

Situational impairments can vary greatly and change frequently, which makes it difficult to apply one-size-fits-all solutions that help users with their needs in real-time. For example, think about a typical morning routine: while brushing their teeth, someone might not be able to use voice commands with their smart devices. When washing their face, it could be hard to see and respond to important text messages. And while using a hairdryer, it might be difficult to hear any phone notifications. Even though various efforts have created solutions tailored for specific situations like these, creating manual solutions for every possible situation and combination of challenges isn't really feasible and doesn't work well on a large scale. ... Rather than devising individual models for activities like face-washing, tooth-brushing, or hair-drying, Human Input/Output (Human I/O) universally assesses the availability of a user’s vision (e.g., to read text messages, watch videos), hearing (e.g., to hear notifications, phone calls), vocal (e.g., to have a conversation, use Google Assistant), and hand (e.g., to use touch screen, gesture control) input/output interaction channels.

Do IDEs Make You Stupid?

An IDE can be an indispensable tool when used to help a developer think better. But when it’s used as a means of automation while removing the developer’s need to understand the underlying tasks of modern computer programming, an IDE can be a detriment. No doubt, an IDE provides a benefit by automating programming tasks that are tedious and repetitive, or even those tasks that require the programmer to do a lot of typing. Still, those commands are there for a reason, and a developer would do well to understand the details of what they’re about and why they need to be done. ... The “hiding the math” aspect of using an IDE might not matter to senior developers who have the experience and insight to understand the hidden details that an IDE has automated. However, for an entry-level developer, using an IDE without understanding what it’s doing behind the scenes can limit the developer’s ability to do the type of more advanced work that’s needed to progress in their career. Knowing the details is important. ... An IDE can improve cognitive ergonomics, but you must want it to. Passive interaction with the tool will get you only so far. 

How to streamline data center sustainability governance

Achieving sustainability goals requires an extensive understanding of energy systems – specifically how, where, and when power is used. Eaton’s Brightlayer Data Centers suite includes the industry’s first digital platform that natively integrates asset management, IT and operational technology (OT) device monitoring, IT automation, power quality metrics, and one-line diagrams into a single, configurable application. Leveraging decades of expertise in the data center industry (from low- and medium-voltage switchgear and transformers to uninterruptible power supplies, battery storage, and power distribution units) this platform consolidates information traditionally siloed in disparate applications. ... More effective data and reporting on sustainability will help future-proof compliance, uncover opportunities to reduce resource consumption, increase customer satisfaction, and differentiate businesses. This approach improves data center performance by applying digitalization to make assets work harder, smarter, and more sustainably.

Why we don't have 128-bit CPUs

You might think 128-bit isn’t viable because it’s difficult or impossible, but that’s not the case. Many components in modern processors, like memory buses and SIMD units, already utilize 128-bit or larger sizes for specific tasks. For instance, the AVX-512 instruction set allows for 512-bit wide data processing. These SIMD (Single Instruction, Multiple Data) instructions have evolved from 32-bit to 64-bit, 128-bit, 256-bit, and now 512-bit operands, demonstrating significant advancements in parallel processing capabilities. ... The only significant use cases for 128-bit integers are IPv6 addresses, universally unique identifiers (or UUID) that are used to create unique IDs for users (Minecraft is a high-profile use case for UUID), and file systems like ZFS. The thing is, 128-bit CPUs aren't necessary to handle these tasks, which have been able to exist just fine on 64-bit hardware. Ultimately, the key reason why we don't have 128-bit CPUs is that there's no demand for a 128-bit hardware-software ecosystem. The industry could certainly make it if it wanted to, but it simply doesn't.

A New Tactic in the Rapid Evolution of QR Code Scams

Because the QR code has ASCII characters behind it, security system may ignore it, thinking it’s a clean email. “Attack forms all evolve,” Fuchs wrote. “QR code phishing is no different. It’s unique, though, that the evolution has happened so rapidly. It started off with standard MFA verification codes. These were pretty straight forward, asking users to scan a code, either to re-set MFA or even look at financial data like an annual 401k contribution.” The next iteration – what Fuchs called QR Code Phishing 2.0 – involved conditional routing attacks, where the link adjusts to where the victim is interacting with it. If the target is using an Apple Mac system, one link appears. Another one will appear if the user is on a smartphone running Android. “We also saw custom QR Code campaigns, where hackers are dynamically populating the logo of the company and the correct username,” he wrote. This newest phase (“QR Code 3.0”) is more of a manipulation campaign, where it is using a text-based representation of a QR code rather than a traditional one. “It also represents how threat actors are responding to the landscape,” he wrote. 

'Sleepy Pickle' Exploit Subtly Poisons ML Models

Poisoning a model in this way carries a number of advantages to stealth. For one thing, it doesn't require local or remote access to a target's system, and no trace of malware is left to the disk. Because the poisoning occurs dynamically during deserialization, it resists static analysis. Serialized model files are hefty, so the malicious code necessary to cause damage might only represent a small fraction of the total file size. And these attacks can be customized in any number of ways that regular malware attacks are to prevent detection and analysis. While Sleepy Pickle can presumably be used to do any number of things to a target's machine, the researchers noted, "controls like sandboxing, isolation, privilege limitation, firewalls, and egress traffic control can prevent the payload from severely damaging the user’s system or stealing/tampering with the user’s data." More interestingly, attacks can be oriented to manipulate the model itself. For example, an attacker could insert a backdoor into the model, or manipulate its weights and, thereby, its outputs.

Digital Twins In Meetings? Not Any Time Soon

The benefits of having a digital twin are very interesting. To start, consider productivity, Bloomfilter founder and CEO Erik Severinghaus told Reworked. Your twin could manage everyday tasks and find problems before they become major headaches. However, there are many problems to solve first. The first thing to understand is how exactly these digital twins would copy us. He also raised the question of security, ensuring these AI versions of us cannot be used to create problems in our lives. Finally, while it is often overlooked, organizations need to keep ethical considerations in mind, Severinghaus continued. Are all employees OK with how their data and images get used by these digital twins? And what about future malicious use cases that no one has even imagined yet? ... While Yuan predicted the use of digital twins at an undetermined future date on the podcast, it clearly is still speculative. Let's just say you're safe from attending a meeting with a digital twin for now. However, given where we were with AI just 18 months ago, we suspect Yuan's vision becoming a reality might not be as far off in the future as you'd think.

Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - June 15, 2024

Does AI make us dependent on Big Tech?

The assumption is that banks would find it impractical to independently develop the extensive computing power required for AI technologies. Heavy reliance on a small number of tech providers, would pose a significant risk, particularly for European banks. It is further assumed that these banks need to retain the flexibility to switch between different technology vendors to prevent excessive dependence on any one provider, a situation also known as vendor lock-in. And now they want to get the governments involved. The U.K. has proposed new regulations to moderate financial firms’ reliance on external technology companies such as Microsoft, Google, IBM, Amazon, and others. Regulators are specifically concerned that issues at any single cloud computing company could disrupt services across numerous financial institutions. The proposed rules are part of larger efforts to protect the financial sector from systemic risks posed by such concentrated dependence on a few tech giants. In its first statement on AI, the European Union’s securities watchdog emphasized that banks and investment firms must not shirk boardroom responsibility when deploying AI technologies. 

How To Choose An Executive Coach? Remember The 5 C’s

A lot of people might put Congruence first, but if you don’t have Clarity the interpersonal dynamics are a moot point—it’s not just about liking your coach. Once you are clear on your goals and outcomes then you should seek a coach with whom you are willing to be psychologically vulnerable. You should test the potential coach to see if their style resonates with yours. For example, are they direct enough for you? Are they structured and organized, if you need that?  ... You should be looking for Credibility—that is, relevant knowledge and expertise. You’ll learn the most by asking questions to explore the coach’s experience and track record. Has the coach worked with other executives at your level? Do they have a frame of reference for your situation and what you are grappling with? Have they worked in a similar environment and successfully coached others with similar challenges? Do they understand the corporate world and the politics of your type of organization? One thing to keep in mind is that many executives today are not just looking for a coach to help them with finding their own solutions, but also for “coach-sulting”—which may include advice and counsel on leadership, strategy, organizational development, team building and tactical problem-solving.

New Research Suggests Architectural Technical Debt Is Most Damaging to Applications

“Architectural challenges and a lack of visibility into architecture throughout the software development lifecycle prevent businesses from reaching their full potential,” said Moti Rafalin, CEO and co-founder of vFunction, a company promoting AI-driven architectural observability and sponsor of the study. “Adding to this, the rapid accumulation of technical debt hampers engineering velocity, limits application scalability, impacts resiliency, and amplifies the risk of outages, delayed projects, and missed opportunities.” Monolithic architectures bear the brunt of the impact, with 57% of organizations allocating over a quarter of their IT budget to technical debt remediation, compared to 49% for microservices architectures. Companies with monolithic architectures are also 2.1 times more likely to face issues with engineering velocity, scalability, and resiliency. However, microservices architectures are not immune to technical debt challenges, with 53% of organizations experiencing delayed major technology migrations or platform upgrades due to productivity concerns.

Surge in Attacks Against Edge and Infrastructure Devices

Not just criminals but also state-sponsored attackers have been exploiting such devices, Google Cloud's Mandiant threat intelligence unit recently warned. One challenge for defenders: Many network edge devices function as "black boxes which are not easily examined or monitored by network administrators," and also lack antimalware or other endpoint detection and response capabilities, WithSecure's report says. "It is difficult for network administrators to verify they are secure, and they often must take it on trust. Certain types of these devices also provide edge services and so are internet-accessible." Many of these devices don't by default produce detailed logs that defenders can monitor using security incident and event management tools to watch for signs of attack. "These devices are supposed to secure our networks, but by itself, there's no way I can install an AV client on it, or an EDR client, or say, 'Hey, give me some fancy logs about what is happening on the device itself,'" said Christian Beek, senior director of threat analytics at Rapid7, in an interview at Infosecurity Europe 2024. 

Edge Devices: The New Frontier for Mass Exploitation Attacks

The attraction to edge devices comes from easier entry; and they provide easier and greater stealth once compromised. Since they often provide a continuous service, they are rarely switched off. Vendors design them for continuity, so purposely make them difficult or impossible for administrator control beyond predefined options. Indeed, any such individual activity can void warranties. They frequently do not produce logs of their activity that can be analyzed by SIEMs, and they cannot be monitored by standard security controls. In this sense they are similar to the OT demand for continuity — why fix something that ain’t broke? Until it is broke, by which time it is probably too late. The result is that edge devices and services often comprise software components that can be decades old involving operating systems that are well beyond end of life; and they are effectively cybersecurity’s forgotten man. Once inside, an attacker is hidden and can plan and execute the attack over time and out of sight. “Edge services are often internet accessible, unmonitored, and provide a rapid route to privileged local or network credentials on a server with broad access to the internal network,” says the report.

Quantum Computing and AI: A Perfect Match?

Quantum AI is already here, but it's a silent revolution, OrĂºs says. "The first applications of quantum AI are finding commercial value, such as those related to LLMs, as well as in image recognition and prediction systems," he states. More quantum AI applications will become available as quantum computers grow more powerful. "It's expected that in two-to-three years there will be a broad range of industrial applications of quantum AI." Yet the road ahead may be rocky, Li warns. "It's well known that quantum hardware suffers from noise that can destroy computation," he says. "Quantum error correction promises a potential solution, but that technology isn't yet available." ... GenAI and quantum computing are mind-blowing advances in computing technology, says Guy Harrison, enterprise architect at cybersecurity technology company OneSpan, in a recent email interview. "AI is a sophisticated software layer that emulates the very capabilities of human intelligence, while quantum computing is assembling the very building blocks of the universe to create a computing substrate," he explains.

How to Offboard Departing IT Staff Members

Some terminations are not amicable, however, and those cases require immediate action. The IT department must implement an emergency revocation procedure that involves the instantaneous deactivation of all of the employee’s access credentials across all systems. Immediate action minimizes the risk of retaliatory actions or data breaches, which are heightened concerns in such scenarios. ... Departing employees often leave behind a trail of licenses and subscriptions for various software and online services used during their tenure. IT departments must undertake a thorough assessment of these digital assets to determine which licenses remain necessary, which can be reallocated and which should be terminated, based on current and anticipated needs. ... Hardware retrieval is an aspect of offboarding that requires at least as much diligence as digital access revocation — and often more, given the number of remote employees that many businesses have. All devices issued to employees — laptops, tablets, smartphones, ID cards and more — must be returned, thoroughly inspected and wiped of sensitive information before they are reassigned or decommissioned.

Integrating Transfer Learning and Data Augmentation for Enhanced Machine Learning Performance

Concretely, the first step consists of applying data augmentation techniques, including flipping, noise injection, rotation, cropping, and color space augmentation, to augment the volume of target domain data. Secondly, a transfer learning model, utilizing ResNet50 as the backbone, extracts transferable features from raw image data. The model’s loss function integrates cross-entropy loss for classification and a distance metric function between source and target domains. By minimizing this combined loss function, the model aims to simultaneously improve classification accuracy on the target domain while aligning the distributions of the source and target domains The experiments compared an enhanced transfer learning method with conventional ones across datasets like Office-31 and pneumonia X-rays. Different models, including DAN and DANN, were tested using various techniques like discrepancy-based and adversarial approaches. The enhanced method, incorporating data augmentation, consistently outperformed others, especially when source and target domains were more similar. 

OIN expands Linux patent protection yet again (but not to AI)

Keith Bergelt, OIN's CEO, emphasized the importance of this update, stating, "Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN's Linux System definition enables OIN to keep pace with OSS's growth." Bergelt explained that this update reflects OIN's well-established process of carefully maintaining a balance between stability and incorporating innovative core open-source technologies into the Linux System definition. The latest additions result from OIN's consensus-driven update process. "OIN is also trying to make patent protection more accessible," he added. "We're trying to make it easier for people to understand what's in there and why it's in there, what it relates to, what projects it relates to, and what it means to developers and laymen as well as lawyers." Looking ahead, Bergelt said, "We made this conscious decision not to include AI. It's so dynamic. We wait until we see what AI programs have significant usage and adoption levels." This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul.

Beyond Sessions: Centering Users in Mobile App Observability

The main use case for tracking users explicitly in backend data is the potential to link them to your mobile data. This linkage provides additional attributes that can then be associated with the request that led to slow backend traces. For example, you can add context that may be too expensive to be tracked directly in the backend, like the specific payload blobs for the request, but that is easily collectible on the client. For mobile observability, tracking users explicitly is of paramount importance. In this space, platforms, and vendors recognize that modeling a user’s experience is essential because knowing the totality and sequencing of the activities around the time a user experiences performance problems is key for debugging. By grouping temporally related events for a user and presenting them in a chronologically sorted order, they have created what has become de rigueur in mobile observability: the user session. Presenting telemetry this way allows mobile developers to spot patterns and provide explanations as to why performance problems occur. 

Quote for the day:

“Every adversity, every failure, every heartache carries with it the seed of an equal or greater benefit.” -- Napoleon Hill

Daily Tech Digest - June 14, 2024

State Machine Thinking: A Blueprint For Reliable System Design

State machines are instrumental in defining recovery and failover mechanisms. By clearly delineating states and transitions, engineers can identify and code for scenarios where the system needs to recover from an error, failover to a backup system or restart safely. Each state can have defined recovery actions, and transitions can include logic for error handling and fallback procedures, ensuring that the system can return to a safe state after encountering an issue. My favorite phrase to advocate here is: “Even when there is no documentation, there is no scope for delusion.” ... Having neurodivergent team members can significantly enhance the process of state machine conceptualization. Neurodivergent individuals often bring unique perspectives and problem-solving approaches that are invaluable in identifying states and anticipating all possible state transitions. Their ability to think outside the box and foresee various "what-if" scenarios can make the brainstorming process more thorough and effective, leading to a more robust state machine design. This diversity in thought ensures that potential edge cases are considered early in the design phase, making the system more resilient to unexpected conditions.

How to Build a Data Stack That Actually Puts You in Charge of Your Data

Sketch a data stack architecture that delivers the capabilities you've deemed necessary for your business. Your goal here should be to determine what your ideal data stack looks like, including not just which types of tools it will include, but also which personnel and processes will leverage those tools. As you approach this, think in a tool-agnostic way. In other words, rather than looking at vendor solutions and building a stack based on what's available, think in terms of your needs. This is important because you shouldn't let tools define what your stack looks like. Instead, you should define your ideal stack first, and then select tools that allow you to build it. ... Another critical consideration when evaluating tools is how much expertise and effort are necessary to get tools to do what you need them to do. This is important because too often, vendors make promises about their tools' capabilities — but just because a tool can theoretically do something doesn't mean it's easy to do that thing with that tool. A data discovery tool that requires you to install special plugins or write custom code to work with a legacy storage system you depend on.

IT leaders go small for purpose-built AI

A small AI approach has worked for Dayforce, a human capital management software vendor, says David Lloyd, chief data and AI officer at the company. Dayforce uses AI and related technologies for several functions, with machine learning helping to match employees at client companies to career coaches. Dayforce also uses traditional machine learning to identify employees at client companies who may be thinking about leaving their jobs, so that the clients can intervene to keep them. Not only are smaller models easier to train, but they also give Dayforce a high level of control over the data they use, a critical need when dealing with employee information, Lloyd says. When looking at the risk of an employee quitting, for example, the machine learning tools developed by Dayforce look at factors such as the employee’s performance over time and the number of performance increases received. “When modeling that across your entire employee base, looking at the movement of employees, that doesn’t require generative AI, in fact, generative would fail miserably,” he says. “At that point you’re really looking at things like a recurrent neural network, where you’re looking at the history over time.”

Why businesses need ‘agility and foresight’ to stay ahead in tech

In the current IT landscape, one of the most pressing challenges is the evolving threat of cyberattacks, particularly those augmented by GenAI. As GenAI becomes more sophisticated, it introduces new complexities for cybersecurity with cybercriminals leveraging it to create advanced attack vectors. ... Several transformative technologies are reshaping our industry and the world at large. At the forefront of these innovations is GenAI. Over the past two years, GenAI has moved from theory to practice. While GenAI has fostered many creative ideas in 2023 of how it will transform business, GenAI projects are starting to become business-ready with visible productivity gains becoming evident. Transformative technology also holds a strong promise to have a profound impact on cybersecurity, offering advanced capabilities for threat detection and incident response from a cybersecurity standpoint. Organisations will need to use their own data for training and fine-tuning models, conducting inference where data originates. Although there has been much discussion about zero trust within our industry, we’re now seeing it evolve from a concept to a real technology. 

Who Should Run Tests? On the Future of QA

QA is a funny thing. It has meant everything from “the most senior engineer who puts the final stamp on all code” to “the guy who just sort of clicks around randomly and sees if anything breaks.” I’ve seen seen QA operating in all different levels of the organization, from engineers tightly integrated with each team to an independent, almost outside organization. A basic question as we look at shifting testing left, as we put more testing responsibility with the product teams, is what the role of QA should be in this new arrangement. This can be generalized as “who should own tests?” ... If we’re shifting testing left now, that doesn’t mean that developers will be running tests for the first time. Rather, shifting left means giving developers access to a complete set of highly accurate tests, and instead of just guessing from their understanding of API contracts and a few unit tests that their code is working, we want developers to be truly confident that they are handing off working code before deploying it to production. It’s a simple, self-evident principle that when QA finds a problem, that should be a surprise to the developers. 

Implementing passwordless in device-restricted environments

Implementing identity-based passwordless authentication in workstation-independent environments poses several unique challenges. First and foremost is the issue of interoperability and ensuring that authentication operates seamlessly across a diverse array of systems and workstations. This includes avoiding repetitive registration steps which lead to user friction and inconvenience. Another critical challenge, without the benefit of mobile devices for biometric authentication, is implementing phishing and credential theft-resistant authentication to protect against advanced threats. Cost and scalability also represent significant hurdles. Providing individual hardware tokens to each user is expensive in large-scale deployments and introduces productivity risks associated with forgotten, lost, damaged or shared security keys. Lastly, the need for user convenience and accessibility cannot be understated. Passwordless authentication must not only be secure and robust but also user-friendly and accessible to all employees, irrespective of their technical expertise. 

Modern fraud detection need not rely on PII

A fraud detection solution should also retain certain broad data about the original value, such as whether an email domain is free or corporate, whether a username contains numbers, whether a phone number is premium, etc. However, pseudo-anonymized data can still be re-identified, meaning if you know two people’s names you can tell if and how they have interacted. This means it is still too sensitive for machine learning (ML) since models can almost always be analyzed to regurgitate the values that go in. The way to deal with that is to change the relationships into features referencing patterns of behavior, e.g., the number of unique payees from an account in 24 hours, the number of usernames associated with a phone number or device, etc. These features can then be treated as fully anonymized, exported and used in model training. In fact, generally, these behavioral features are more predictive than the original values that went into them, leading to better protection as well as better privacy. Finally, a fraud detection system can make good use of third-party data that is already anonymized. 

Deepfakes: Coming soon to a company near you

Deepfake scams are already happening, but the size of the problem is difficult to estimate, says Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm. In some cases, the scams go unreported to save the victim’s reputation, and in other cases, victims of other types of scams may blame deepfakes as a convenient cover for their actions, he says. At the same time, any technological defenses against deepfakes will be cumbersome — imagine a deepfakes detection tool listening in on every phone call made by employees — and they may have a limited shelf life, with AI technologies rapidly advancing. “It’s hard to measure because we don’t have effective detection tools, nor will we,” says Williams, a former hacker at the US National Security Agency. “It’s going to be difficult for us to keep track of over time.” While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial, Williams adds. Unless your Zoom meeting is of HD or better quality, a face swap may be good enough to fool most people.

A Deep Dive Into the Economics and Tactics of Modern Ransomware Threat Actors

A common trend among threat actors is to rely on older techniques but allocate more resources and deploy them differently to achieve greater success. Several security solutions organizations have long relied on, such as multi-factor authentication, are now vulnerable to circumvention with very minimal effort. Specifically, organizations need to be aware of the forms of MFA factors they support, such as push notifications, pin codes, FIDO keys and legacy solutions like SMS text messages. The latter is particularly concerning because SMS messaging has long been considered an insecure form of authentication, managed by third-party cellular providers, thus lying outside the control of both employees and their organizations. In addition to these technical forms of breaches, the tried-and-true method of phishing is still viable. Both white hat and black hat tools continue to be enhanced to exploit common MFA replay techniques. Like other professional tools used by security testers like Cobalt Strike used by threat actors to maintain persistence on compromised systems, MFA bypass/replay tools have also gotten more professional. 

Troubleshooting Windows with Reliability Monitor

Reliability Monitor zeroes in on and tracks a limited set of errors and changes on Windows 10 and 11 desktops (and earlier versions going back to Windows Vista), offering immediate diagnostic information to administrators and power users trying to puzzle their way through crashes, failures, hiccups, and more. ... There are many ways to get to Reliability Monitor in Windows 10 and 11. At the Windows search box, if you type reli you’ll usually see an entry that reads View reliability history pop up on the Start menu in response. Click that to open the Reliability Monitor application window. ... Knowing the source of failures can help you take action to prevent them. For example, certain critical events show APPCRASH as the Problem Event Name. This signals that some Windows app or application has experienced a failure sufficient to make it shut itself down. Such events are typically internal to an app, often requiring a fix from its developer. Thus, if I see a Microsoft Store app that I seldom or never use throwing crashes, I’ll uninstall that app so it won’t crash any more. This keeps the Reliability Index up at no functional cost.

Quote for the day:

"Success is a state of mind. If you want success start thinking of yourself as a sucess." -- Joyce Brothers