Daily Tech Digest - June 25, 2024

Six Strategies For Making Smarter Decisions

Broaden your options - Instead of Options A and B, what about C or even D? A technique I use in working with client organizations is to set up a “challenge statement” that inevitably reveals multiple possibilities to be decided upon. I’ll have small groups of four or five people take 10 minutes to list all the options without discussing or critiquing them during the exercise. Frame challenge statements thusly: “In what ways might we accomplish X?” ... Listen to your gut - Intuition is knowing something without knowing quite how we know it. All of us have it, but in a data-driven world, listening to it becomes harder. Before making an important choice, one executive I interviewed gathers information, weighs all the facts – then takes time to stop and listen to what his gut is telling him. “When a decision doesn’t feel good,” one executive commented, “It feels like a stomachache. And when a decision feels right, it’s like I’ve eaten a great meal. If I don’t feel good in my gut about a decision, I don’t care if the numbers say we’re going to make a billion dollars, I won’t go ahead with it. That’s how important intuition is to me.”


Overcoming Stagnation And Implementing Change To Facilitate Business Growth: The How-To

Overcoming stagnation is about understanding that doing the same thing over and over again will give you the same results over and over again. But bringing about change in the former will naturally impact the latter. The three main objectives in any transformation initiative that aims to set up a strong foundation to scale or grow a business are: become financially lean with the ability to scale either up or down as per market demands, become internally efficient, and to run its day-to-day operations independent of its founder or leader. ... Ideally, it would be wise to aim to maintain 60-70% of the total operating cost as fixed costs, while keeping the remaining as variable costs, allowing for flexibility to adjust the costing structure based on business needs, while maintaining profitability throughout the transition- and beyond. When an efficient business achieves this level of financial optimization and is managed by a competent team, then the founder or leader will have the time to work on the business, concentrating on long-term growth strategic issues, instead of the day-to-day of the enterprise.


Build your resilience toolkit: 3 actionable strategies for HR leaders

Go beyond current job descriptions to identify talent or skill gaps. Focus on future-focused talent acquisition strategies and design upskilling and reskilling programs. Aim to close the skills gap and attract talent with transferable skill sets and a growth mindset. This approach keeps your workforce adaptable and prepared for future challenges. ... Adapting work models and fostering continuous learning cultures are essential. HR leaders can implement flexible work arrangements, such as remote or hybrid models. Encouraging experimentation and risk-taking within teams, and integrating continuous learning opportunities into performance management systems, are key actionable tips. Agile approaches help HR leaders adapt quickly to shifting business requirements. Collaborative work environments are critical in an agile HR strategy. ... Open communication and safe spaces are essential for a supportive culture. HR leaders can encourage employees to voice concerns by creating channels for open dialogue. This approach ensures employees feel heard and valued, contributing to a more inclusive workplace.


The 4 skills you need to engineer a career in automation

Automation engineers are often required to work cohesively with multidisciplinary teams and for that reason, it can be useful to have a solid grasp of workplace soft skills, in addition to compulsory hard skills. Automation engineers are expected to take complex, highly nuanced information and relay it back to not only their peers, but to people who do not have a strong technical background. This requires expert communication skills, as well as an ability to collaborate. ... If you are considering a career as an automation engineer, then a foundational understanding of programming languages and how they are applied is compulsory, as you will frequently need to write and maintain the code that keeps operation systems running. The choice of programming language greatly impacts the success of automation in the workplace, as it will provide and improve versatility, scalability and integration. ... As AI advances, global workplaces will have to evolve in tandem, meaning automation engineers will have to have a standard level of AI and machine learning skills to stay competitive. 


Navigating the Evolving World of Cybersecurity Regulations in Financial Services

Accountability for cybersecurity measures is a key element of the NYDFS regulations. CISOs now must provide a report updating their governing body or board of directors on the company’s cybersecurity posture and plans to fix any security gaps, Burke says. Maintaining accountability entails communicating with the board about cybersecurity risks, explains Kirk J. Nahra, partner and co-chair of the cybersecurity and privacy practice at law firm WilmerHale. “The board needs to understand that its job is to evaluate major issues for a company, and a ransomware attack that shuts down the whole business is a major risk,” Nahra says. “The boards have to become more sophisticated about information security.” ... The NYDFS calls for organizations to have cybersecurity policies that are reviewed and approved annually. Previously, regulations concentrated more on processes and best practices, Nahra says. Now, they are becoming more prescriptive, but multiple regulators are inconsistent, and their standards may conflict at times.


How Banks Can Get Past the AI Hype and Deliver Real Results

If the bank’s backend systems aren’t automated, all the rapidly responding chatbot has done is make a promise that a human will have to solve when they finally get to that point in the inbox, Bandyopadhyay says. When they ultimately get back to the customer, that efficient chatbot doesn’t actually look so efficient. Bandyopadhyay explains that this is merely meant as an illustration of the bank has to be ready for front ends and back ends of customer-facing systems to be in synch. The potential result is alienating customers with significant problems. ... The real power of GenAI is its ability to digest and deploy unstructured data. But Bandyopadhyay points out that most banks use legacy systems that can’t capture any of that information. “It’s not data that you put in rows and columns on a spreadsheet,” says Bandyopadhyay. “It’s what language we write and that we speak.” To truly implement GenAI in the long run, he continues, banks will have to lick the longstanding legacy systems problem. Until then, most of their databases aren’t talking GenAI’s language.


Singapore lays the groundwork for smart data center growth

In a move that stunned industry observers, Singapore announced on May 30 that it would release more data center capacity to the tune of 300MW, a substantial figure and a new policy direction for the nation-state. ... The 300MW will come as part of a newly unveiled Green Data Centre (DC) Roadmap drawn up by IMDA, so it does have conditions attached. According to the statutory board, the roadmap was developed to chart a “sustainable pathway” for the continued growth of data centers in Singapore to support the nation’s digital economy. Per the roadmap, Singapore hopes to work with the industry to pioneer solutions for more resource-efficient data centers. One way to view it is as a carrot that it can use to spur data center operators to innovate and accelerate data center efficiency on both hardware and software levels. It is all well and good to talk about allocating hundreds of megawatts of capacity for data centers. But with electrical grids around the world heaving from electrification and sharply rising power demands, is Singapore in a position to deliver this capacity to data center operators today?


Information Blocking of Patient Records Could Cost Providers

Information blocking is defined as a practice that is likely to interfere with the access, exchange or use of electronic health information, except as required by law or specified in one of nine information blocking exceptions. ... Under the security exception, it is not considered information blocking for an actor to interfere with the access, exchange or use of EHI to protect the security of that information, provided certain conditions are met. For example, during a security incident, such as a ransomware attack, a healthcare provider might be unable to provide access or exchange to certain EHI for a time, and that would not constitute information blocking. ... So, as of now, if a healthcare provider does not participate in any of the CMS payment programs that are currently subject to the disincentives, they do not face any potential penalties for information blocking. But that could change moving forward. HHS officials during a briefing with media on Monday said HHS is considering adding other disincentives for healthcare providers that do not participate in such CMS programs. 


How is AI transforming the insurtech sector?

The use of AI also brings risks and ethical considerations for insurers and insurtech firms. “With all AI, you need to understand where the AI models are from and where the data is being trained from and, importantly, whether there is an in-built bias,” says Kevin Gaut, chief technology officer at insurtech INSTANDA. “Proper due diligence on the data is the key, even with your own internal data.” It’s essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. “A notable issue is the black-box nature of some AI algorithms that produce results without explanation,” he warns. “To address this, it’s essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent with their AI use and co-operating when questioned.” AI applications themselves also raise the potential for organisations to get caught out in cyber-attacks. “Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims,” points out Brugger. 


Evaluating crisis experience in CISO hiring: What to look for and look out for

So long as a candidate’s track record is verifiable and clear in its contribution to intrusion events, direct experience of a crisis may actually be more indicative of future success than more traditional metrics. By contrast, be wary of the “onlookers,” those individuals with qualifications but whose learned experience comes from arm’s length involvement in a crisis. While such persons may contribute positively to their organization, the role of the crisis in their hiring should be de-emphasized relative to more conventional metrics of future performance. ... The emerging consensus of research is that being present for multiple stages of the response lifecycle — being impacted by an attack’s disruptions or helping with preparedness for a future response — is far better experience than simply witnessing an attack. Those who experience the initial effects of a compromise or other attack and then go on to orient, analyze, and engage in mitigation activities are the ones for whom over-generalization and perverse informational reactions appear less likely.



Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden

Daily Tech Digest - June 20, 2024

Measure Success: Key Cybersecurity Resilience Metrics

“Cyber resilience is a newer concept. It can get thrown around when one really means cybersecurity, and also in cases where no one really cares about the difference between the two,” says Mike Macado, CISO at BeyondTrust, an identity and access security company. “And to be fair, there can be some blurring between the two. ... “Once the resilience objectives are clear, KPIs can be set to measure them. While there are many abstract possible KPIs, it is crucial to set meaningful and measurable KPIs that can indicate your cyber resilience level and not only tick the box,” says Kellerman. And what are the meaningful, core KPIs? “These include mean time to detect, mean time to respond, recovery time objective, recovery point objective, percentage of critical systems with exposures, employee awareness and phishing click-rates, and an overall assessment of leadership. These KPIs will properly assess your security controls and whether they are protecting your critical path assets, helping to ensure they’re capable of preventing threats.” Kellerman adds. ... “The ability to recover from a cybersecurity attack within a reasonable time that guarantees business continuity is a crucial indicator of resilience...” says Joseph Nwankpa


Most cybersecurity pros took time off due to mental health issues

“Cybersecurity professionals are at the forefront of a battle they know they are going to lose at some point, it is just a matter of time. It’s a challenging industry and businesses need to recognize that without motivation, cybersecurity professionals won’t be at the top of their game. We’ve worked with both cybersecurity and business leaders to understand the challenges the industry faces. What we’ve discovered shows just how difficult the job is and that there is a significant gap of understanding between the board and the professionals,” said Haris Pylarinos, CEO at Hack The Box. “We’re calling for business leaders to work more closely with cybersecurity professionals to make mental well-being a priority and actually provide the solutions they need to succeed. It’s not just the right thing to do, it makes business sense,” concluded Pylarinos. “Stress, burnout and mental health in cybersecurity is at an all-time high. It’s also not just the junior members of the team, but right up to the CISO level too,” said Sarb Sembhi, CTO at Virtually Informed.


Forget Deepfakes: Social Listening Might be the Most Consequential Use of Generative AI in Politics

Ultimately, the most vulnerable individuals likely to be affected by these trends are not voters; they are children. AI chatbots are already being piloted in classrooms. “Children are once again serving as beta testers for a new generation of digital tech, just as they did in the early days of social media,” writes Caroline Mimbs Nyce for The Atlantic. The risks from generative AI outputs are well documented, from hallucinatory responses to search inquiries to synthetic nonconsensual sexual imagery. Given the rapid normalization of surveillance in education technology, more attention should probably be paid to the inputs such systems collect from kids. ... Not every AI problem requires a policy solution specific to AI: a federal data privacy law that applied to campaigns and political action committees would go a long way toward regulating generative AI-enabled social listening, and could have been put in place long before that technology became widely accessible. The fake Biden robocalls in New Hampshire similarly commend low-tech responses to high-tech problems: the political consultant behind them is charged not with breaking any law against AI fakery but with violating laws against voter suppression.


Resilience in leadership: Navigating challenges and inspiring success

Research shows that cultivating resilience is a long and arduous journey that requires self-awareness, emotional intelligence, and a relentless commitment to personal growth. A great example of this quality and a leader I admire greatly is Jensen Huang, President of Nvidia, which is now one of the most valuable companies in the world with a market cap of more than $2 trillion. As Huang describes quite candidly in many interviews, his early years and the hardships he endured helped him build resilience, where he learnt to brush things off and move on no matter how difficult the situation was. While addressing the grad students at Stanford Graduate School of Business, Huang revealed that “I wish upon you ample doses of pain and suffering,” as he believes great character is only formed out of people who have suffered. These experiences have not only helped Huang develop a robust management style but have also helped him approach any problem with the mindset of “How hard can it be?” While Jensen’s life exemplifies the importance of hardships and suffering, resilience isn't limited to overcoming hardships; it's also about innovation and adaptability in leadership. 


IDP vs. Self-Service Portal: A Platform Engineering Showdown

It’s easy to get lost in the sea of IT acronyms at the best of times, and the platform engineering ecosystem is the same, particularly given that these two options seem to promise similar things but deliver quite differently. For many organizations, choosing or building an IDP might be what they think is required to save their developers from repetitive work while looking for a self-service portal (SSP) to streamline automation. ... By providing a user-friendly interface to define and deploy cloud resources, an SSP frees up the time and effort required to set up complex infrastructure configurations. Centralizing resources provides oversight while also enabling guardrails to be established to protect against “shadow IT” being deployed. This not only helps identify resources that aren’t being used to save money but also helps make cloud practices more eco-friendly by removing unnecessary resources. This is the main difference between an SSP and an IDP, and understanding which capabilities an organization needs is critical for ensuring a smooth platform engineering journey. Like a Russian doll, an IDP is a layer on top of an SSP that offers tools to streamline the entire software development lifecycle.


Chinese Hackers Used Open-Source Rootkits for Espionage

Attackers exploited an unauthenticated remote command execution zero-day on VMware vCenter tracked as CVE-2023-34048. If the threat group failed to gain initial access on the VMware servers, the attackers targeted similar flaws in FortiOS, a flaw in VMware vCenter called postgresDB, or a VMware Tools flaw. After compromising the edge devices, the group's pattern has been to deploy open-source Linux rootkit Reptile to target virtual machines hosted on the appliance. It uses four rootkit components to capture secure shell credentials. These include Reptile.CMD to hide files, processes and network connections; Reptile.Shell to listen to specialized packets- a kernel level file to modify the .CMD file to achieve rootkit functionality; and a loadable kernel file for decrypting the actual module and loading it into the memory. "Reptile appeared to be the rootkit of choice by UNC3886 as it was observed being deployed immediately after gaining access to compromised endpoints," Mandiant said. "Reptile offers both the common backdoor functionality, as well as stealth functionality that enables the threat actor to evasively access and control the infected endpoints via port knocking."


What are the benefits of open access networks?

Toomey says there are various benefits to open access networks, with a key benefit being the fostering of competition. “This competition drives innovation as providers strive to offer the best services and technologies to attract and retain customers,” she said. “Additionally, open access networks can reduce costs for service providers by sharing infrastructure, leading to more affordable services for end-users. “These networks also promote greater network efficiency and resource utilisation, benefiting the entire telecom ecosystem.” But there are challenges with building an open access network, as Toomey said there are high costs in building and maintaining the necessary infrastructure. Enet invested €50m in 2022 to expand its fibre network, but saw its profits fall 47pc to €3.7m in the same year. “Additionally, there is a risk of overbuild, where multiple networks are constructed in the same area, leading to inefficient resource use,” Toomey said. “Another challenge is the centralised thinking on network roll-out in cities, which can neglect rural and underserved areas, creating a digital divide. “Addressing these challenges requires strategic planning and investment, as well as collaboration with government and industry stakeholders to ensure balanced network development.”


CIOs take note: Platform engineering teams are the future core of IT orgs

The core roles in a platform engineering team range from infrastructure engineers, software developers, and DevOps tool engineers, to database administrators, quality assurance, API and security engineers, and product architects. In some cases teams may also include site reliability engineers, scrum masters, UI/UX designers, and analysts who assess performance data to identify bottlenecks. And according to Joe Atkinson, chief products and technology officer at PwC, these teams offer a long list of benefits to IT organizations, including building and maintaining scalable, flexible infrastructure and tools that enable efficient operations; developing standardized frameworks, libraries, and tools to enable rapid software development; cutting costs by consolidating infrastructure resources; and ensuring security and compliance at the infrastructure level. ... You can’t have a successful platform engineering team without building the right culture, says Jamie Holcombe, USPTO CIO. “If you don’t inspire the right behavior then you’ll get people who point at each other when something goes wrong.” And don’t withhold information, he adds. 


What is the current state of Security Culture in Europe?

Organizations prioritizing the establishment and upkeep of a security culture will encourage notably heightened security awareness behaviors among their employees. Examining this further, research has shown that organizations in Europe have a good understanding of security culture as both a process and a strategic measure. However, many have yet to take their first tactical steps toward achieving that goal. Those who have done so realize that shaping security behaviors is essential in developing a security culture. ... Delving deeper, smaller European organisations score higher in security culture due to more effective personal communication, stronger community bonds and better support for security issues. This naturally leads to enhanced Cognition and Compliance, with improvements in communication channels posited as a key driver for better security policy understanding and proactive security behaviours that outperform global averages. Conducting an examination of which industries displayed the best security culture within Europe, it is certainly gaining traction among security experts within sectors like finance, banking and IT, which are all heavily digitized.


Data Integrity: What It Is and Why It Matters

While data integrity focuses on the overall reliability of data in an organization, Data Quality considers both the integrity of the data and how reliable and applicable it is for its intended use. Preserving the integrity of data emphasizes keeping it intact, fully functional, and free of corruption for as long as it is needed. This is done primarily by managing how the data is entered, transmitted, and stored. By contrast, Data Quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and Data Quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. 



Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis

Daily Tech Digest - June 19, 2024

Executive Q&A: Data Quality, Trust, and AI

Data observability is the process of interrogating data as it flows through a marketing stack -- including data used to drive an AI process. Data observability provides crucial visibility that helps users both interrogate data quality and understand the level of data quality prior to building an audience or executing a campaign. Data observability is traditionally done through visual tools such as charts, graphs, and Venn diagrams, but is itself becoming AI-driven, with some marketers using natural language processing and LLMs to directly interrogate the data used to fuel AI processes. ... In a way, data silos are as much a source of great distress to AI as they are to the customer experience itself. A marketer might, for example, use a LLM to help generate amazing email subject lines, but if AI generates those subject lines knowing only what is happening in that one channel, it is limited by not having a 360-degree view of the customer. Each system might have its own concept of a customer’s identity by virtue of collecting, storing, and using different customer signals. When siloed data is updated on different cycles, marketers lose the ability to engage with a customer in the precise cadence of the customer because the silos are out of synch with a customer journey.


Only 10% of Organizations are Doing Full Observability. Can Generative AI Move the Needle?

The potential applications of Generative AI in observability are vast. Engineers could start their week by querying their AI assistant about the weekend’s system performance, receiving a concise report that highlights only the most pertinent information. This assistant could provide real-time updates on system latency or deliver insights into user engagement for a gaming company, segmented by geography and time. Imagine being able to enjoy your weekend and arrive at work with a calm and optimistic outlook on Monday morning, and essentially saying to your AI assistant: “Good morning! How did things go this weekend?” or “What’s my latency doing right now, as opposed to before the version release?” or “Can you tell me if there have been any changes in my audience, region by region, for the past 24 hours?” These interactions exemplify how Generative AI can facilitate a more conversational and intuitive approach to managing development infrastructure. It’s about shifting from sifting through data to engaging in meaningful dialogue with data, where follow-up questions and deeper insights are just a query away.


The Ultimate Roadmap to Modernizing Legacy Applications

IT leaders say they plan to spend 42 percent more on average on application modernization because it is seen as a solution to technical debt and a way for businesses to reach their digital transformation goals, according to the 2023 Gartner CIO Agenda. But even with that budget allocated, businesses still face significant challenges, such as cost constraints, a shortage of staff with appropriate technical expertise, and insufficient change management policies to unite people, processes and culture around new software. To successfully navigate the path forward, IT leaders need a strategic roadmap for application modernization. The plan should include prioritizing which apps to upgrade, aligning the effort with business objectives, getting stakeholder buy-in, mapping dependencies, creating data migration checklists and working with trusted partners to get the job done. ... “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes in a post.


Is it time to split the CISO role?

In one possible arrangement, a CISO reports to the CEO and a chief security technology officer (CSTO), or technology-oriented security person, reports to the CIO. At a functional level, putting the CSTO within IT gives the CIO a chance to do more integration and collaboration and unites observability and security monitoring. At the executive level, there’s a need to understand security vulnerabilities and the CISO could assist with strategic business risk considerations, according to Oltsik. “This kind of split could bring better security oversight and more established security cultures in large organizations.” ... To successfully change focus, CISOs would need to get a handle on things like the financials and company strategy and articulate cyber controls in this framework, instead of showing up every quarter with reports and warnings. “CISOs will need to incorporate their risk taxonomy into the overall enterprise risk taxonomy,” Joshi says. In this arrangement, however, the budget could arise as a point of contention. CIO budgets tend to be very cyber heavy these days, Joshi explains, and it could be difficult to create the situation where both the CISO and CIO are peers without impacting this allocation of funds.


Empowering IIoT Transformation through Leadership Support

To gain project acceptance and ultimately ensure project success will rely heavily on identifying all key stakeholders, nurturing an on-going level of mutual trust and maintaining a strong focus on targeted end results. This involves a full disclosure of desired outcomes and a willingness to adapt to individual departmental nuances. Begin with a cross-department kickoff/planning meeting to identify interested parties, open projects, and available resources. Invite participation through a discovery meeting, focusing on establishing the core team, primary department, cross-department dependencies, and consolidating open projects or shareable resources. ... Identifying all digital data blind spots at the outset highlights the scale of the problem. While many companies have Artificial Intelligence (AI) and Business Intelligence (BI) initiatives, their success depends on the quality of the source data. Consolidating these initiatives to address digital data blind spots strengthens the data-driven business case. Once a critical mass of baselines is established, projecting Return On Investment (ROI) from both a quantification and qualification perspective becomes possible. 


Will more AI mean more cyberattacks?

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work. One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.” Jow believes organisations need to wake up to the risk of such activities.


What It Takes to Meet Modern Digital Infrastructure Demands and Prepare for Any IT Disaster

As you evaluate the evolving needs of your organization’s own infrastructure demands, consider whether your network is equipped to handle a growing volume of data-intensive applications — and if your team is ready to act in the face of unexpected service interruption. The push to adopt advanced technologies like AI and automation are the main drivers of network optimization for most organizations. But the growing prevalence of volatile, uncertain, complex, and ambiguous (VUCA) situations is another reason to review your communications infrastructure’s readiness to withstand future challenges. VUCA is a catch-all term for a wide range of unpredictable and challenging situations that can impact an organization’s operations, from natural disasters to political conflict, economic instability, or cyber-attacks. ... Maintaining operational continuity and resilience in the face of VUCA events requires a combination of strategic planning, operational flexibility, technological innovation, and risk-management practices. This includes investing in technology that improves agility and resilience as well as in people who are prepared for adaptive decision-making when VUCA situations arise.


APIs Are the Building Blocks of Bank Innovation. But They Have a Risky Dark Side

A key point is that it’s not just institutions suffering. Frequently APIs used by banks draw on PII (personally identifiable information) such as social security numbers, driver’s license data, medical information and personal financial data. APIs may also handle device and location data. “While this data may not seem as sensitive as PII or payment card details at first glance, it can still be exploited by malicious actors to gain insights into a user’s behavior, preferences and movements,” the report says. “In the wrong hands, this information could be used for targeted phishing attacks, social engineering, or even physical threats.” “Everything in the financial transaction world today is going across the internet, via APIs,” says Bird. ... Bird points out that the bad guys have more than just tools from the dark web to help them do their business. Frequently they tap the same mainstream tools that bankers would use. He laughs when he recalls demonstrating to a reporter how a particular fraud would have been assisted using Excel pivot tables. The journalist didn’t think of criminals using legitimate software. “Why wouldn’t they?” said Bird.


Enterprise AI Requires a Lean, Mean Data Machine

Today’s LLMs need volume, velocity, and variety of data at a rate not seen before, and that creates complexity. It’s not possible to store the kind of data LLMs require on cache memory. High IOPS and high throughput storage systems that can scale for massive datasets are a required substratum for LLMs where millions of nodes are needed. With superpower GPUs capable of lightning-fast read storage read times, an enterprise must have a low-latency, massively parallel system that avoids bottlenecks and is designed for this kind of rigor. ... It’s crucial that these technological underpinnings of the AI era be built with cost efficiency and reduction of carbon footprint in mind. We know that training LLMs and the expansion of generative AI across industries are ramping up our carbon footprint at a time when the world desperately needs to reduce it. We know too that CIOs consistently name cost-cutting as a top priority. Pursuing a hybrid approach to data infrastructure helps ensure that enterprises have the flexibility to choose what works best for their particular requirements and what is most cost-effective to meet those needs.


Building Resilient Security Systems: Composable Security

The concept of composable security represents a shift in the approach to cybersecurity. It involves the integration of cybersecurity controls into architectural patterns, which are then implemented at a modular level. Instead of using multiple standalone security tools or technologies, composable security focuses on integrating these components to work in harmony. ... The concept of resilience in composable security is reflected in a system's ability to withstand and adapt to disruptions, maintain stability, and persevere over time. In the context of microservices architecture, individual services operate autonomously and communicate through APIs. This design ensures that if one service is compromised, it does not impact other services or the entire security system. By separating security systems, the impact of a failure in one system unit is contained, preventing it from affecting the entire system. Furthermore, composable systems can automatically scale according to workload, effectively managing increased traffic and addressing new security requirements.



Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - June 18, 2024

The Intersection of AI and Wi-Fi 7

Wi-Fi 7 is the newest standard in wireless networking. Though official ratification isn't expected until the end of 2024, Wi-Fi 7 client devices and wireless access points are already available. The top line speed of Wi-Fi 7 is often stated at 46 Gbps, but actual speeds will be lower. The higher speeds of Wi-Fi 7 are delivered by using a 320 MHz wide channel, increasing the transmission rate to 4K QAM and increasing the number of transmit and receive chains to 16. Another key advantage of Wi-Fi 7 is a significant reduction in packet latency, thanks to a feature called Multi-Link Operation (MLO). ... AI Autonomous Networks consolidate key performance indicators to aid decision-making. During the shift from 2.4 GHz and 5 GHz to 6 GHz networking, IT managers can use AI to expose timing and predict improvements, facilitating timely network upgrades. Another example is digital twin architecture, which simulates the network environment using real-world client analytics to model behavior, evaluate security changes, and assess configuration adjustments. The goal is to provide IT managers with tools for timely and accurate decisions.


Linux in your car: Red Hat’s milestone collaboration with exida

Red Hat’s collaboration with exida marks a significant milestone. While it may not be obvious to all of us, Linux is playing an increasingly important role in the automotive industry. In fact, even the car you’re driving today could be using Linux in some capacity. Linux is very well known and appreciated in the automotive industry with increasing attention being paid both to its reliability and its security. The phrase “open source for the open road” is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment. The safety of vehicles that get us from one place to another on a nearly daily basis has become a serious priority. ... Their focus on ensuring the safety of both individual components and the operating system as a whole is crucial. This latest achievement brings them even closer to realizing the first continuously-certified in-vehicle Linux Red Hat In-Vehicle Operating System. Their open source first approach to the organization, culture and thought process is an exemplary superset of what exida regards as a best practice for world-class safety culture. 


How CIOs Can Integrate AI Among Employees Without Impacting DEI

As technology adoption accelerates, employees risk falling behind in adapting to meet enterprise demands. This trend has been evident across computing eras, from PCs to the current AI and Internet of Things era. Each phase widens the gap between technology introduction and employees’ ability to use it effectively. ... To prioritize DEI in addressing employee upskilling to leverage AI, CIOs can embrace a spectrum of initiatives, from establishing peer mentorship programs to providing access to online courses, workshops, and conferences. The aim is to promote educational opportunities for those most at risk of falling behind, which will increase the cost risk in the future due to the extra cost of retraining staff or seeking new talent. To successfully link digital dexterity to DEI to prepare employees, CIOs should implement a training program that equitably exposes all workforce segments to AI and the machine economy to develop soft and technical skills. Shift the focus of AI adoption away from solely business needs and focus on individual empowerment


What is a CAIO — and what should they know?

CAIOs and others tasked with overseeing AI deployments play an essential role in “shaping an organization’s strategic, informed and responsible use of AI,” he said. “There are many responsibilities baked into the role, but at its core, it’s about steering the direction of AI initiatives and innovation to align with company goals. AI leads must also create a culture of collaboration and continuous learning.” ... While CAIOs might not always be seated at the C-suite table, those who are there are keenly focused on genAI and its potential to drive efficiencies and profits. Without an executive guiding those deployments, achieving the performance and ROI organizations seek will be tough, she said. “It’s hard to imagine how pieces come together and how you’d bring together so many players,” Kosar said, noting that PwC has more than a dozen different LLMs running internally to power AI tools and products in virtually every business unit. “You have to have the ability to do short-term and long-term planning and balance the two and stay focused on innovation,” she continued. “At the same time, you need to recognize the pace of change while not getting distracted by the latest shiny object.”


How AI is impacting data governance

Every organization needs to establish policies around the handling of its data—informed by federal, state, industry, and international regulations as well as internal business rules. In larger enterprises, a data governance committee sets those policies and specifies how they should be followed in a living document that evolves as regulations and procedures change. The natural language capabilities of generative AI can pop out first drafts of that documentation and make subsequent changes much less onerous. By analyzing data usage patterns, regulatory requirements, and internal workflows, AI can help organizations define and enforce data retention policies and automatically identify data that has reached the end of its useful life. ... AI-powered disaster recovery systems can help organizations develop sound recovery strategies by predicting potential failure scenarios and establishing preventive measures to minimize downtime and data loss. Backup systems infused with AI can ensure the integrity of backups and, when disaster strikes, automatically initiate recovery procedures to restore lost or corrupted data.


The impact of compliance technology on small FinTech firms

However, smaller firms often struggle to adapt quickly due to resource constraints, leading to a more reactive compliance management approach. For smaller firms, running on thin resources could mean higher risks. Many operate with minimal compliance staff or assign compliance duties to employees who juggle multiple roles. This can stretch employees too thin, making it tough to keep up with regulatory changes or manage conflicts of interest that might jeopardize the firm. The use of basic tools like spreadsheets and emails increases the risk of missing important updates or failing to adequately address identified risks due to the lack of clear ownership and effective action plans. Furthermore, regulatory penalties can disproportionately impact smaller firms that lack the financial buffer to absorb significant fines. The ever-evolving regulatory landscape poses an ongoing risk to compliance. Smaller firms must navigate a vast array of compliance policies and procedures. Even those with dedicated compliance or legal experts face the challenge of sifting through extensive documentation to identify relevant changes. 


Revolutionising firms’ security with SASE

For Indian companies, today is an opportune time to have a well-thought long-term SASE strategy and identify short-term consolidation tactics to achieve your desired SASE model. There may be a change required in the firm’s IT culture to adopt integrated networking and security teams, which involves a shift from silo ways of working to shared control. Because no two SASE journeys are the same, therefore, it is up to enterprises to prepare differently and plan for different or customized outcomes. And the first step to doing so is selecting a trusted partner to help in the assessment of your network and security roadmaps against SASE as the reference architecture. Just as significant as the delivery and operational components of SASE, is having a partner who understands innovation and agility, with an eye towards the future. The partner should be able to assist in technology evaluation, establish proof of value, and recommend adaptations to integrate SASE components – all of which go toward laying the foundation for the firm’s security and network roadmaps. Firms should know that when it comes to executing SASE, it isn’t just done and dusted but a multi-disciplinary project with moving parts.


The Next Phase of the Fintech Revolution: Inside the Disruption and the Challenges Facing Banking

The thing that’s causing the most waves right now, frankly, is the regulators. We had evolved to this architecture where you had fintechs doing their thing. You had sponsor banks of various types underneath who were actually bearing the regulatory burden and holding the cash — things that only banks can really do. And then you had these middleware companies that are generically kind of known as banking as a service companies (BaaS). That architecture, which underpins much of the payments, lending and banking innovation that we’ve seen, has now been called into question by regulators and is being litigated ... The most important theme right now is the implications of generative AI for financial services and, not least of all, retail banking. What’s being funded right now are basically vendors. So, this new crop of technology companies is springing up to serve banks and financial institutions more generally and help them with digital transformation as it relates to generative AI. So, you could think of chatbot companies as being probably the most advanced wedge on this and customer service generally as a way to introduce generative AI, lower OpEx and create more customer delight.


Data Governance and AI Governance: Where Do They Intersect?

AI governance needs to cover the contents of the data fed to and retrieved through AI, in addition to considering the level of AI intelligence. Doing so addresses issues like biases, privacy, use of intellectual property, and misuse of the technology. Consequently, AIG needs to guide what subject matter can be processed through AI, when, and in what contexts. ... AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. ... The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 


Enhancing security through collaboration with the open-source community

Without funding, it is difficult for open-source projects to get official certifications. So, companies in regulated sectors that need those certifications often can’t use open-source solutions. For the rest, open-source really has “eaten the world.” Most modern tech companies wouldn’t exist without open-source tools, or would have drastically different offerings. ... Too many just download the open-source project and run away. One way for corporate entities to get involved is by contributing bug fixes and small features. This can be done through anonymous email accounts if it’s necessary to keep the company’s involvement private. Companies should also use the results of their security analysis to help improve the original project. There is some self-interest involved here. Why should a company use its resources to maintain proprietary patches for an open-source project when it can instead send those patches back and have the community maintain them for free? Google has been doing a good job of this with their OSS-FUZZ project. It has found many bugs and helped a large number of the open-source projects using it.



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - June 17, 2024

The agility and flexibility to adjust sizing as needed are critical for resilience. "You can upscale or downscale based on growth, and this is why resilience is important. In case of any economic or external risks, we are prepared to run our operations, and our IT services are equipped to meet business demands," she said. ... Integrating technology to connect with suppliers and customers is crucial for mitigating risks and enhancing collaboration. "Accurate and agile information through integrations is crucial for resilience," Fernando said. Hela Clothing has automated plant and manufacturing flows to provide real-time visibility and efficiency, ensuring data availability for the business to recalibrate and move swiftly in response to capacity issues. "Security, data governance and skilled talent are the fulcrum of any resilient strategy," Najam said. Data is the new oil, but it needs to be refined to be valuable. Proper data governance and management are crucial for high-confidence analytics. Building strong partnerships with vendors and customers is also essential for a resilient organization. 


The 10 biggest issues IT faces today

All the work around AI has further highlighted the value of data — for the organizations and hackers alike. That, along with the ever-increasing sophistication of the bad actors and the consequences of suffering an attack, has turned up the heat on CIOs. “Indications are that hackers/ransomware agents are becoming more aggressive. At the same time, operations and decision-making are increasingly dependent on data availability and accuracy. Meanwhile, the perimeter of exposure widens as remote workers and connected devices proliferate. This is an arms race, and the CIO must lead the charge by implementing better tools and training,” Carco says. ... Swingtide’s Carco sees a related issue that many CIOs face today, which is effectively managing vendors as the number of providers within the IT function dramatically grows. “CIOs are coming to recognize that an organization built for internal operations is not well-suited to managing dozens or hundreds of external providers, and the proliferation of contractual obligations can be overwhelming. In case of emergency, knowing who has your data, what their contractual obligations are to safeguard it, and how they are performing has become extremely difficult,” Carco says.


Solving the Challenges of Multicloud Cost Management

The issue starts with the mere task of importing billing data from cloud providers. Although all major public clouds generate bills that detail what you spent each month, they expose the billing data in a different way. Amazon Web Services (AWS) generates a large CSV file... For its part, Azure expects customers to import billing data using APIs. This means that simply getting all of your billing data into a central location requires implementing multiple data importation workflows. Once the data is centralized, comparing it can be challenging because each cloud provider structures billing data a bit differently. ... Consider, too, that GCP breaks cloud server spending into separate costs for compute and memory. AWS and Azure don't do this; they report billing information based only on the total resources consumed by a cloud server. Thus, if you use AWS or Azure, you need to disaggregate the data yourself if you want the level of granularity that GCP provides by default — and doing so is important if you need to make an apples-to-apples comparison of what you spend for both compute and memory across clouds.


The rise of SaaS security teams

The challenge of securing a SaaS environment demands a multifaceted security strategy and that starts with a strong SaaS security team. Providing education in line with employee’s job functions is essential. So, for security teams that means ongoing training and professional development opportunities so they are up-to-date on the latest threats and technologies. Training is particularly important when it comes to the tools they’ll be utilizing in order to fully take advantage of the capabilities offered. A security team is only as good as the tools they are given to work with so companies need to make sure that they’re deploying (and updating) advanced security tools that are tailored to cloud applications. Teams also need standardized processes for incident response, regular security assessments, and compliance monitoring as an established workflow lends itself to consistency across an organization especially with the diverse nature of the SaaS ecosystem. While not specific to setting up a security team, once the team is in place zero trust’s principle of “never trust, always verify” will go a long way to strengthening not only a SaaS security posture but that of the entire organization.


Fostering tech innovators through entrepreneurial engineering education

Cultivating an entrepreneurial mind-set is critical in this educational transformation. It encourages students to look beyond conventional career paths and to consider how they can make societal impacts through innovation. Entrepreneurial development cells and similar initiatives within universities play a crucial role by providing mentorship, resources, and support systems that propel students to explore innovative ideas and bring them to life. These programs are pivotal in aiding students to launch their own ventures, thereby enriching the start-up ecosystem directly. Moreover, there is a growing emphasis on integrating emerging technologies such as artificial intelligence (AI), machine learning, blockchain, and the Internet of Things (IoT) into the engineering curriculum and to some extent this has already happened too. This integration ensures that students are not only consumers of technology but also its creators. Project-based problems utilizing these technologies to address practical problems—such as developing AI-driven healthcare solutions or blockchain-based supply chain enhancements—highlight the tangible impacts of a robust educational framework. 


Grooming Employees for Leadership Roles

Empathy is critical to fostering an inclusive work environment and building a culture of trust. As a leader, one has to empower the team to have the courage to take risks, make tough decisions in the face of adversity, and remove the fear of failure. This is possible with open and honest communication with teams and continuous encouragement to proactively drive ‘difficult’ projects. Also, I believe a well-acknowledged team is a highly motivated team. Recognising achievers in your team keeps them motivated to keep outperforming and also fosters a culture of continuous improvement. While we keep achieving new milestones and grow exponentially as an organisation, it is equally important to recognise our talent and motivate them to grow. ... As a leader, one has to set clear expectations with their teams, provide support and mentoring, create a safe space for experimentation, encourage cross-functional collaboration, empower teams to take decisions, and invest in technology and infrastructure. As an organisation, Welspun Corp encourages its people to keep acquiring new knowledge, refine their skills, and adapt to change. 


Disaster recovery vs ransomware recovery: Why CISOs need to plan for both

Many organizations approach disaster recovery and cyber incident response measures from a compliance perspective. They want to check all the required boxes, which means that sometimes, “they do the bare minimum,” says Igor Volovich, vice president of compliance strategy at Qmulos. While doing this is necessary, it is not sufficient. The better approach, he suggests, would be to treat compliance requirements as a detailed guide and adopt a more holistic view based on data that is automatically collected, analyzed, and reported in real time. This involves, of course, strengthening the security posture, as well as developing or updating a thorough disaster recovery plan. ... When it comes to creating the resilience strategy, Ramakrishnan recommends having separate plans for different potential crises and storing them in physical folders in the network operations center, in addition to electronic copies. “While electronic access is crucial, physical documentation provides a tangible backup and is easily accessible in situations where digital systems may be compromised,” he says.


Preparing Your DevOps Workforce for the Shifting Landscape of Tech Talent

Traditionally, the tech industry placed a high value on academic degrees. However, as the pace of technological change accelerates, the skills required to succeed in this field are evolving rapidly. ... The move towards skills-based hiring is a response to the pragmatic needs of the industry. Hiring managers are increasingly prioritizing candidates with tangible skills and certifications directly applicable to the projects and technologies at hand. This approach opens up opportunities for a wider pool of talent, including those who may have taken non-traditional paths to acquire their skills. ... By prioritizing upskilling and cross-skilling initiatives, companies can cultivate a versatile and adaptable workforce ready to tackle the challenges of emerging technologies. It is much like applying DevOps practices to staff development - being agile with iterative learning and adaptation and continuously updating your skills. ... As we look to the future, one thing is clear: the tech industry's approach to talent is undergoing a profound transformation. Yesterday's methods won't train tomorrow's workforce. 


Here’s When To Use Write-Ahead Log and Logical Replication in Database Systems

Logical replication provides benefits compared to methods such as WAL. Firstly, it offers the advantage of replication, allowing for the replication of tables or databases rather than all changes, which enhances flexibility and efficiency. Secondly, it enables replication, facilitating synchronization across types of databases, especially useful in environments with diverse systems. Moreover, logical replication grants control over replication behavior including conflict resolution and data transformation, leading to accurate data synchronization management. Depending on the setup, logical replication can function asynchronously or synchronously, providing options to prioritize performance or data consistency based on requirements. These capabilities establish replication as a robust tool for maintaining synchronized data in distributed systems. Logical replication presents a level of adaptability for administrators by allowing them to select which data to replicate for targeted synchronization purposes. This feature streamlines the process by replicating tables or databases and reducing unnecessary workload. 


When to ignore — and believe — the AI hype cycle

Established tech incumbents and startups are transforming their technology platforms simultaneously and big technology platform providers are also displaying an incredible amount of agility in adapting. This translates into a much more rapid evolution of the build with gen AI stacks compared to what we saw in the early days of the build with the cloud. If compute and data are the currency of innovation in gen AI, we have to ask ourselves where are startups sustainably positioned versus established tech incumbents who have structural advantages and more access to compute. Higher up in the stack, the opportunity in applications seems quite vast — but given where we are in the hype cycle, the reliability of AI outputs, the regulatory landscape and advancements in cybersecurity posture are key gating factors that need to be addressed for commercial adoption at scale. Lastly, foundation models have achieved the performance they have due to pre-training on internet scale datasets. What still lies ahead to realize the benefits of AI is the ability to assemble large, high-quality datasets to build models in more industry-specific domains. 



Quote for the day:

“If you're doing your best, you won't have any time to worry about failure.” -- H. Jackson Brown, Jr.