Daily Tech Digest - April 16, 2024

How to Build a Successful AI Strategy for Your Business in 2024

With a solid understanding of AI technology and your organization’s priorities, the next step is to define clear objectives and goals for your AI strategy. Focus on identifying the problems that AI can solve most effectively within your organization. These objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). ... By setting well-defined objectives, you can create a targeted AI strategy that delivers tangible results and aligns with your overall business priorities. An AI implementation strategy often requires specialized expertise and tools that may not be available in-house. To bridge this gap, identify potential partners and vendors who can provide the necessary support for your AI strategy. Start by researching AI and machine learning companies that have a proven track record of working in your industry. When evaluating potential partners, consider factors such as their technical capabilities, the quality of their tools and platforms, and their ability to scale as your AI needs grow. Look for vendors who offer comprehensive solutions that cover the entire AI lifecycle, from data preparation and model development to deployment and monitoring.


Internet can achieve quantum speed with light saved as sound

When transferring information between two quantum computers over a distance—or among many in a quantum internet—the signal will quickly be drowned out by noise. The amount of noise in a fiber-optic cable increases exponentially the longer the cable is. Eventually, data can no longer be decoded. The classical Internet and other major computer networks solve this noise problem by amplifying signals in small stations along transmission routes. But for quantum computers to apply an analogous method, they must first translate the data into ordinary binary number systems, such as those used by an ordinary computer. This won't do. Doing so would slow the network and make it vulnerable to cyberattacks, as the odds of classical data protection being effective in a quantum computer future are very bad. "Instead, we hope that the quantum drum will be able to assume this task. It has shown great promise as it is incredibly well-suited for receiving and resending signals from a quantum computer. So, the goal is to extend the connection between quantum computers through stations where quantum drums receive and retransmit signals, and in so doing, avoid noise while keeping data in a quantum state," says Kristensen.


Better application networking and security with CAKES

A major challenge in enterprises today is keeping up with the networking needs of modern architectures while also keeping existing technology investments running smoothly. Large organizations have multiple IT teams responsible for these needs, but at times, the information sharing and communication between these teams is less than ideal. Those responsible for connectivity, security, and compliance typically live across networking operations, information security, platform/cloud infrastructure, and/or API management. These teams often make decisions in silos, which causes duplication and integration friction with other parts of the organization. Oftentimes, “integration” between these teams is through ticketing systems. ... Technology alone won’t solve some of the organizational challenges discussed above. More recently, the practices that have formed around platform engineering appear to give us a path forward. Organizations that invest in platform engineering teams to automate and abstract away the complexity around networking, security, and compliance enable their application teams to go faster.


AI set to enhance cybersecurity roles, not replace them

Ready or not, though, AI is coming. That being the case, I’d caution companies, regardless of where they are on their AI journey, to understand that they will encounter challenges, whether from integrating this technology into current processes or ensuring that staff are properly trained in using this revolutionary technology, and that’s to be expected. As a cloud security community, we will all be learning together how we can best use this technology to further cybersecurity. ... First, companies need to treat AI with the same consideration as they would a person in a given position, emphasizing best practices. They will also need to determine the AI’s function — if it merely supplies supporting data in customer chats, then the risk is minimal. But if it integrates and performs operations with access to internal and customer data, it’s imperative that they prioritize strict access control and separate roles. ... We’ve been talking about a skills gap in the security industry for years now and AI will deepen that in the immediate future. We’re at the beginning stages of learning, and understandably, training hasn’t caught up yet.
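The role-separation point lends itself to a concrete illustration. The Python sketch below uses assumed role names and permissions (none of which come from the article) to show a deny-by-default check that limits a chat-assist AI to supporting data while blocking operations that touch internal or customer records.

```python
from enum import Enum, auto

class Role(Enum):
    CHAT_ASSIST = auto()   # read-only: supplies supporting data in customer chats
    OPS_AGENT = auto()     # performs operations against internal systems

# Hypothetical permission map: each AI role gets only the access it needs.
PERMISSIONS = {
    Role.CHAT_ASSIST: {"read:kb_articles"},
    Role.OPS_AGENT: {"read:kb_articles", "read:customer_record", "write:ticket"},
}

def authorize(role: Role, action: str) -> None:
    """Deny by default: raise unless the action is explicitly granted to the role."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role.name} is not allowed to perform '{action}'")

# A chat assistant may read knowledge-base articles ...
authorize(Role.CHAT_ASSIST, "read:kb_articles")
# ... but an attempt to touch customer data is rejected before it reaches the AI.
try:
    authorize(Role.CHAT_ASSIST, "read:customer_record")
except PermissionError as err:
    print(err)
```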


Why employee recognition doesn't work: The dark side of boosting team morale

Despite the importance of appreciation, many workplaces prioritise performance-based recognition, inadvertently overlooking the profound impact of genuine appreciation. This preference for recognition over appreciation can lead to detrimental outcomes, including conditionality and scarcity. Conditionality in recognition arises from its link to past achievements and performance outcomes. Employees often feel pressured to outperform their peers and surpass their past accomplishments to receive recognition, fostering a hypercompetitive work environment that undermines collaboration and teamwork. Furthermore, the scarcity of recognition exacerbates this issue, as tangible rewards such as bonuses or promotions are limited. In this competitive landscape, employees may feel undervalued, leading to disengagement and disillusionment. To foster an inclusive and supportive workplace culture, organisations must recognise the intrinsic value of appreciation alongside performance-based recognition. Embracing appreciation cultivates a culture of gratitude, empathy, and mutual respect, strengthening interpersonal connections and boosting employee morale.


Improving decision-making in LLMs: Two contemporary approaches

Training LLMs in context-appropriate decision-making demands a delicate touch. Currently, two sophisticated approaches posited by contemporary academic machine learning research suggest alternate ways of enhancing the decision-making process of LLMs to parallel those of humans. The first, AutoGPT, uses a self-reflexive mechanism to plan and validate the output; the second, Tree of Thoughts (ToT), encourages effective decision-making by disrupting traditional, sequential reasoning. AutoGPT represents a cutting-edge approach in AI development, designed to autonomously create, assess and enhance its models to achieve specific objectives. Academics have since improved the AutoGPT system by incorporating an “additional opinions” strategy involving the integration of expert models. This introduces a novel integration framework that harnesses expert models, such as analyses from different financial models, and presents their outputs to the LLM during the decision-making process. In a nutshell, the strategy revolves around increasing the model’s information base with relevant information.
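As a rough illustration of the "additional opinions" idea, the Python sketch below assembles outputs from two stand-in expert models into the prompt an LLM sees before it decides. The expert functions and the call_llm helper are hypothetical placeholders, not part of any published AutoGPT implementation.

```python
def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real model client here.
    return f"[LLM would respond to a prompt of {len(prompt)} characters here]"

def momentum_model(ticker: str) -> str:
    return f"Momentum model: {ticker} shows positive 30-day momentum."

def risk_model(ticker: str) -> str:
    return f"Risk model: {ticker} volatility is above its sector average."

EXPERTS = [momentum_model, risk_model]

def decide_with_additional_opinions(ticker: str, question: str) -> str:
    # Gather each expert's analysis and place it in the LLM's context.
    opinions = "\n".join(expert(ticker) for expert in EXPERTS)
    prompt = (
        "You are making an investment decision.\n"
        f"Question: {question}\n"
        "Additional expert opinions:\n"
        f"{opinions}\n"
        "Weigh the opinions, then answer with a recommendation and a short rationale."
    )
    return call_llm(prompt)

print(decide_with_additional_opinions("ACME", "Should we increase our position?"))
```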


Unpacking the Executive Order on Data Privacy: A Deeper Dive for Industry Professionals

For privacy professionals, the order underscores the ongoing challenge of protecting sensitive information against increasingly sophisticated threats. That’s important, and shouldn’t be overlooked. Yet the White House has admitted that this order isn’t a silver bullet for all the nation’s data privacy challenges. That candor is striking. It echoes a sentiment familiar to many of us in the industry: the complexities of protecting personal information in the digital age cannot be fully addressed through singular measures against external threats. Instead, this task requires a long-term, thoughtful, multi-faceted approach – one that also confronts the internal challenges to data privacy posed by Big Tech, domestic data brokers, and foreign governments that exist outside of the designated “countries of concern” category. ... The extensive collection, usage, and sale of personal data by domestic entities—including but not limited to Big Tech companies, data brokers, and third-party vendors—poses significant risks. These practices often lack transparency and accountability, fueling privacy breaches, identity theft, and eroding public trust and individual autonomy.


10 tips to keep IP safe

CSOs who have been protecting IP for years recommend doing a risk and cost-benefit analysis. Make a map of your company’s assets and determine what information, if lost, would hurt your company the most. Then consider which of those assets are most at risk of being stolen. Putting those two factors together should help you figure out where to best spend your protective efforts (and money). If information is confidential to your company, put a banner or label on it that says so. If your company data is proprietary, put a note to that effect on every log-in screen. This seems trivial, but if you wind up in court trying to prove someone took information they weren’t authorized to take, your argument won’t stand up if you can’t demonstrate that you made it clear that the information was protected. ... Awareness training can be effective for plugging and preventing IP leaks, but only if it’s targeted to the information that a specific group of employees needs to guard. When you talk in specific terms about something that engineers or scientists have invested a lot of time in, they’re very attentive. As is often the case, humans are the weakest link in the defensive chain.


Types of Data Integrity

Here are a few data integrity issues and risks many organizations face: Compromised hardware: Power outages, fire sprinklers, or a clumsy person knocking a computer to the floor are examples of situations that can cause the loss of vital data or its corruption. Security considers compromised hardware to be hardware that has been hacked. Cyber threats: Cyber security attacks – phishing attacks, malware – present a serious threat to data integrity. Malicious software can corrupt or alter critical data within a database. Additionally, hackers gaining unauthorized access can manipulate or delete data. If changes are made as a result of unauthorized access, it may be a failure in data security. ... Human error: A significant source of data integrity problems is human error. Mistakes that are made during manual entries can produce inaccurate or inconsistent data that then gets stored in the database. Data transfer errors: During the transfer of data, data integrity can be compromised. Transfer errors can damage data integrity, especially when moving massive amounts of data during extract, transform, and load processes, or when moving the organization’s data to a different database system.
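One common safeguard against the transfer errors described above is to verify size and content checksums on both sides of a move before loading the data. Here is a minimal Python sketch of that check; the file names in the usage note are illustrative.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large extracts don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, destination: Path) -> bool:
    """Compare size and content hash of the extract before and after the move."""
    same_size = source.stat().st_size == destination.stat().st_size
    same_hash = sha256_of_file(source) == sha256_of_file(destination)
    return same_size and same_hash

# Usage (hypothetical file names):
# if not verify_transfer(Path("orders_extract.csv"), Path("/mnt/dwh/orders_extract.csv")):
#     raise RuntimeError("Integrity check failed: do not load this extract")
```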


Sisense Breach Highlights Rise in Major Supply Chain Attacks

Many of the details of the attack are not yet clear, but the breach may have exposed hundreds of Sisense's prominent customers to a supply chain attack that gave hackers a backdoor into the company's customer networks, a CISA official told Information Security Media Group. Experts said the attack suggests trusted companies are still failing to implement proactive defensive measures to spot supply chain attacks - such as robust access controls, real-time threat intelligence and regular security assessments - at a time when organizations are increasingly reliant on interconnected ecosystems. "These types of software supply chain attacks are only possible through compromised developer credentials and account information from an employee or contractor," said Jim Routh, chief trust officer for the software security company Saviynt. The breach highlights the need for enterprises to improve their identity access management capabilities for cloud-based services and other third parties, he said. Security intelligence platform Censys published insights into the Sisense breach Friday.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - April 15, 2024

Generative AI Strategy For Enterprise

Align the guidelines with enterprise business initiatives. Identify the business challenges that require attention. Also, understand the business benefits of AI adoption that are critical for the success of the enterprise. Select the targeted use cases and perform the proofs of concept (PoCs) that can deliver the desired business and operational outcomes. AI use cases should not be viewed in isolation. AI initiatives and technology should be integrated into existing business processes and workflows to optimize and streamline them. Build value through improved productivity, growth, and new business models. ... Prioritize GenAI use-case initiatives based on highest potential value and feasibility to execute. Implement a model development lifecycle that includes products and services, rigorous testing, validation, and documentation. Build a roadmap that provides a plan to deliver the identified GenAI applications by prioritizing and simplifying the actions required to deliver the identified initiatives. Create processes for ongoing monitoring and auditing of GenAI systems for responsible use of AI, ensuring compliance with legal and ethical standards and guarding against algorithmic bias.
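To make the prioritization step concrete, here is a small Python sketch that ranks candidate use cases by weighted value and feasibility scores. The use cases, scores, and weights are purely illustrative assumptions, not figures from the article.

```python
# Candidate GenAI use cases scored 1-5 on business value and feasibility to execute.
use_cases = [
    {"name": "HR policy chatbot",          "value": 3, "feasibility": 5},
    {"name": "Contract clause summarizer", "value": 4, "feasibility": 4},
    {"name": "Demand forecasting copilot", "value": 5, "feasibility": 2},
]

VALUE_WEIGHT, FEASIBILITY_WEIGHT = 0.6, 0.4  # illustrative weighting

def priority(use_case: dict) -> float:
    return VALUE_WEIGHT * use_case["value"] + FEASIBILITY_WEIGHT * use_case["feasibility"]

# Highest-scoring use cases become the first PoCs on the roadmap.
for uc in sorted(use_cases, key=priority, reverse=True):
    print(f"{uc['name']}: {priority(uc):.1f}")
```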


Do cloud-based genAI services have an enterprise future?

“Given the data gravity in the cloud, it is often the easiest place to start with training data. However, there will be a lot of use cases for smaller LLMs and AI inferencing at the edge. Also, cloud providers will continue to offer build-your-own AI platform options via Kubernetes platforms, which have been used by data scientists for years now,” Sustar said. “Some of these implementations will take place in the data center on platforms such as Red Hat OpenShift AI. Meanwhile, new GPU-oriented clouds like Coreweave will offer a third option. This is early days, but managed AI services from cloud providers will remain central to the AI ecosystem.” And while smaller LLMs are on the horizon, enterprises will still use major companies’ AI cloud services when they need access to very large LLMs, according to Litan. Even so, more organizations will eventually be using small LLMs that run on much smaller hardware, “even as small as a common laptop.” “And we will see the rise of services companies that support that configuration along with the privacy, security and risk management services that will be required,” Litan said.


6 bad cybersecurity habits that put SMBs at risk

Cybersecurity can’t be addressed with technology alone and in many ways it’s a human problem, according to Sage. “Technology enables attacks, technology facilitates preventing attacks, technology helps with cleaning up after an attack, but that technology requires a knowledgeable human to be effective, at least for now,” they say. This also feeds into other problems, which are a lack of budget and no dedicated responsibility for cybersecurity. “These are significant challenges for SMBs, leaving them without guidance on compliance frameworks and a clear direction, and reliant on providers for support,” says Iqbal. ... Adopting good cyber hygiene habits should be a no-brainer, although it can be hit or miss. For instance, allowing the use of weak passwords is all too common, according to Iqbal. He’s also found instances where the default password for logins has not been changed or all the passwords for security servers are changed to a single password and there isn’t a separate administrative password. “The admin account is the most lucrative account threat actors are looking to compromise. It just takes one compromise and then the keys to the kingdom are flung open to all your potential threat actors,” he says.
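As a rough sketch of the hygiene checks Iqbal describes, the Python snippet below flags accounts that still carry a known default password or that share one password across systems. The inventory is invented for illustration; a real audit would compare hashes pulled from a vault or directory, never plaintext passwords.

```python
from collections import Counter

# Illustrative data only: a real check would never handle plaintext passwords.
KNOWN_DEFAULTS = {"admin", "password", "changeme", "12345678"}

inventory = [
    {"host": "fw-01",  "account": "admin", "password": "changeme"},
    {"host": "srv-01", "account": "admin", "password": "S3curely-Unique!"},
    {"host": "srv-02", "account": "admin", "password": "S3curely-Unique!"},
]

# Flag accounts still using a well-known default password.
for entry in inventory:
    if entry["password"].lower() in KNOWN_DEFAULTS:
        print(f"{entry['host']}: default password still in use for {entry['account']}")

# Flag a single password reused across multiple systems.
counts = Counter(entry["password"] for entry in inventory)
for password, count in counts.items():
    if count > 1:
        hosts = [e["host"] for e in inventory if e["password"] == password]
        print(f"Same password reused on: {', '.join(hosts)}")
```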


Generative AI is coming for healthcare, and not everyone’s thrilled

While generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool. “Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.” Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing. “Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. 


State of the CIO, 2024: Change makers in the business spotlight

The push for innovation requires a steady hand, and CIOs are stepping in to provide guidance, including orienting the greater enterprise to the potential — and the pitfalls — of new technologies like AI. Eighty-five percent of respondents to the 2024 State of the CIO survey view the CIO as a critical change maker and a much-needed resource given the pace and scale of change, amplified by the frenzy around AI. “With all the hype of AI and the velocity at which technology is evolving, my focus as a CIO continuously and relentlessly has to be through the lens of strategy, execution, and culture,” says Sanjeev Saturru, CIO at Casey’s, the third-largest convenience store chain in the United States. ... “Eighteen months ago, AI was an interesting topic, but today, if you don’t have a plan to elevate experience via AI you are behind,” says LaQuinta. “We have a maniacal focus on maximizing the contribution of advanced intelligence, supported by AI. That could be making information available at the click of a button to help advisors be more efficient with their time or to serve clients better in a hyperpersonalized way.”


Cloned Voice Tech Is Coming for Bank Accounts

At many financial institutions, your voice is your password. Tiny variations in pitch, tone and timbre make human voices unique - apparently making them an ideal method for authenticating customers phoning for service. Major banks across the globe have embraced voice print recognition. It's an ideal security measure, as long as computers can't be trained to easily synthesize those pitch, tone and timbre characteristics in real time. They can. Generative artificial intelligence bellwether OpenAI in late March announced a preview of what it dubbed Voice Engine, technology that with a 15-second audio sample can generate natural-sounding speech "that closely resembles the original speaker." While OpenAI touted the technology for the good it could do - instantaneous language translation, speech therapy, reading assistance - critics' thoughts went immediately to where it could do harm, including in breaking that once ideal authentication method for keeping fraudsters out. It also could supercharge impersonation fraud fueling "child in trouble" and romance scams as well as disinformation.


Data pipelines for the rest of us

In some ways, Airflow is like a seriously upgraded cron job scheduler. Companies start with isolated systems, which eventually need to be stitched together. Or, rather, the data needs to flow between them. As an industry, we’ve invented all sorts of ways to manage these data pipelines, but as data increases, the systems to manage that data proliferate, not to mention the ever-increasing sophistication of the interactions between these components. It’s a nightmare, as the Airbnb team wrote when open sourcing Airflow: “If you consider a fast-paced, medium-sized data team for a few years on an evolving data infrastructure and you have a massively complex network of computation jobs on your hands, this complexity can become a significant burden for the data teams to manage, or even comprehend.” Written in Python, Airflow naturally speaks the language of data. Think of it as connective tissue that gives developers a consistent way to plan, orchestrate, and understand how data flows between every system. A significant and growing swath of the Fortune 500 depends on Airflow for data pipeline orchestration, and the more they use it, the more valuable it becomes. 
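For readers who have not seen one, an Airflow pipeline is just a Python file that declares tasks and the dependencies between them as a DAG. Below is a minimal sketch in the Airflow 2.x style; the pipeline name and task bodies are placeholders, not a real workflow.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw order data from the source system")

def transform():
    print("clean and join the extracted data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="orders_pipeline",        # hypothetical pipeline name
    start_date=datetime(2024, 4, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract must finish before transform, transform before load.
    extract_task >> transform_task >> load_task
```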


The 5 Steps to Crafting an Impactful Enterprise Architecture Communication Strategy

To successfully convey the significance of enterprise architecture within an organization, a structured and strategic approach to communication is crucial. Here’s an overview of the five pivotal steps to create an impactful enterprise architecture communication strategy: Clarify Strategic Objectives: Define clear-cut enterprise architecture objectives that align with the broader vision of the organization. ... Contextual Understanding: Assess the current state of enterprise architecture in your organization and the specific goals you seek to achieve through this communication strategy. ... Audience Insights: Segment your internal audience to understand the varying levels of EA awareness and the distinct needs across departments. ... Selecting Suitable Communication Tools: With a plethora of digital tools available, it’s essential to choose those that best align with your enterprise architecture communication goals. ... Developing the EA Communication Plan: Integrate all insights and choices into a coherent communication plan that outlines how enterprise architecture will be communicated across the organization. 


A Call for Technology Resilience

A major inflection point in application development has been the adoption of Agile. With iterative, Agile application development, an application or system is never finished. It’s continuously changing as business conditions and circumstances change. Both users and IT accept this iterative development without endpoints. On the other hand, endpoints (and more of them!) in IT projects also can foster technology resilience. They achieve resilience because a large project that gets interrupted by an immediate and overriding business necessity is more easily paused if it is structured as a series of mini projects that deliver incremental functionality. ... Your network goes down under a malware attack, but your network guru has just left the company for another opportunity. Do you have someone who can step in and do the work? Or, what if your DBA leaves? How long can you delay defining an AI data architecture, and will it harm the company competitively? To achieve IT roster depth, staff must be trained in new responsibilities, or at least cross-trained in different roles that they can assume if needed.


SaaS Tools: Major Threat Vector for Enterprise Security

When considering SaaS security risks, organizations have to take into account whether the SaaS provider is an established player or a startup, Lobo said. Established players have the resources to invest heavily in the security of their applications, and are less vulnerable to code injection attacks. Organizations do not have the auditing powers to measure an established vendor's security credentials and have no recourse but to trust the vendor. But when it comes to dealing with smaller companies, organizations can scrutinize encryption and cloud security practices, evaluate supply chains, check for vulnerabilities in the application code and conduct frequent security assessments. Lobo said many organizations today rely on services such as SecurityScorecard, UpGuard and similar companies that keep track of vulnerabilities in enterprise software and alert users, giving them the opportunity to patch third-party software prior to exploitation. Shankar Ramaswamy, solutions director at Bangalore-based IT consultancy giant Wipro, said organizations using third-party SaaS applications must focus on three major aspects - strengthen endpoint security, minimize the application's access to internal resources and replace passwords with multi-factor authentication.



Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs

Daily Tech Digest - April 14, 2024

Why small language models are the next big thing in AI

The complexity of tools and techniques required to work with LLMs also presents a steep learning curve for developers, further limiting accessibility. There is a long cycle time for developers, from training to building and deploying models, which slows down development and experimentation. ... Enter small language models. SLMs are more streamlined versions of LLMs, with fewer parameters and simpler designs. They require less data and training time—think minutes or a few hours, as opposed to days for LLMs. This makes SLMs more efficient and straightforward to implement on-site or on smaller devices. One of the key advantages of SLMs is their suitability for specific applications. Because they have a more focused scope and require less data, they can be fine-tuned for particular domains or tasks more easily than large, general-purpose models. This customization enables companies to create SLMs that are highly effective for their specific needs, such as sentiment analysis, named entity recognition, or domain-specific question answering. 
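The on-device point is easy to demonstrate: a compact, fine-tuned model can handle a narrow task such as sentiment analysis on an ordinary laptop CPU. A minimal sketch using the Hugging Face transformers pipeline follows; the model shown is a publicly available distilled sentiment classifier chosen for illustration, not one referenced in the article.

```python
from transformers import pipeline

# A distilled BERT variant fine-tuned for sentiment analysis; small enough to
# download and run on a laptop CPU without a GPU.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The new release fixed our latency problems."))
# Example output: [{'label': 'POSITIVE', 'score': 0.99...}]
```

Fine-tuning the same class of model on a company's own labeled data follows the same pattern: the base model is small, so training runs finish in minutes to hours rather than the days required for a large general-purpose model.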


Navigating the AI revolution

Regulatory bodies like the European Union (EU) closely monitor data center energy usage through the Energy Efficiency Directive. This directive mandates that data center operators with a total rated power of 500 kilowatts or above are required to publicly report their energy performance data annually. An integral aspect of sustainability involves addressing 'Scope 3 emissions' under the Greenhouse Gas Protocol. While there’s a significant focus on Scope 1 and 2 responsibilities, which measure emissions from a data center’s own operations and electricity usage, Scope 3 emissions encompass the broader environmental impact of indirect emissions generated by data center operations outside its premises. ... The rise of AI technologies presents challenges and opportunities for the data center industry. By incorporating track busway solutions into data center infrastructure, operators can address the challenges posed by escalating power densities while contributing to significantly reducing Scope 3 emissions. This will position data centers to meet the demands of AI-driven workloads while ensuring their facilities’ continued reliability and energy efficiency.


Large language models generate biased content, warn researchers

Dr. Maria Perez Ortiz, an author of the report from UCL Computer Science and a member of the UNESCO Chair in AI at UCL team, said, "Our research exposes the deeply ingrained gender biases within large language models and calls for an ethical overhaul in AI development. As a woman in tech, I advocate for AI systems that reflect the rich tapestry of human diversity, ensuring they uplift rather than undermine gender equality." The UNESCO Chair in AI at UCL team will be working with UNESCO to help raise awareness of this problem and contribute to solution developments by running joint workshops and events involving relevant stakeholders: AI scientists and developers, tech organizations, and policymakers. Professor John Shawe-Taylor, lead author of the report from UCL Computer Science and UNESCO Chair in AI at UCL, said, "Overseeing this research as the UNESCO Chair in AI, it's clear that addressing AI-induced gender biases requires a concerted, global effort. This study not only sheds light on existing inequalities but also paves the way for international collaboration in creating AI technologies that honor human rights and gender equity. ..."


Enterprise of the Future: Disruptive Technology = Infinite Possibilities

In the current state, employees are required to navigate multiple enterprise application interfaces and browse many systems to get information for their day-to-day activities. It could be a simple activity such as fetching clarification on leave policies, checking their leave balance or reporting an incident to have a laptop or printer issue fixed. To accomplish their daily responsibilities, employees often end up searching for standard operating procedures and other relevant data and knowledge. In general, to perform their day-to-day tasks, they must go through multiple enterprise application interfaces. This often results in a steep learning curve for the employees, requiring them to remember where specific data and information reside, to understand the functionality of multiple enterprise applications and to master the way they operate. This fragmentation of information across different data sources and the labyrinth of applications that need to be navigated to perform daily activities leads to inefficiencies and confusion that ultimately impact productivity. Moreover, changes within an organisation necessitate training on new systems and new ways of working, requiring employees to learn and adapt continuously.


The state of open source in Europe

Europe is renowned for regulations, and the past year has resulted in several large policy frameworks that influence tech — the Cyber Resilience Act (CRA), the Product Liability Directive, and the EU AI Act. With lots of information to digest and react to, both FOSDEM and SOOCON held deep-dive sessions. Over the past year, the CRA has been the biggest concern for open-source communities, as it puts responsibility for harm caused by software into the hands of creators. For open-source software, this is complicated, as who is really responsible? The creator of the open-source software or its implementor? Many open-source projects have no legal entity that anyone can hold “responsible” for problems or harm. ... “‘Open-source software and hardware’ used to be enough to encompass the community and its aims and concerns. Now it’s ‘Open-source software, hardware, and data’.” An open data movement, which aims to keep public data as freely accessible as possible, has existed for some time. The EU alone has nearly 2 million data sets. However, this past year saw the open-source community have to care about the openness of data in completely new ways.


Elemental Surprise: Physicists Discover a New Quantum State

“The search and discovery of novel topological properties of matter have emerged as one of the most sought-after treasures in modern physics, both from a fundamental physics point of view and for finding potential applications in next-generation quantum science and engineering,” said Hasan. “The discovery of this new topological state made in an elemental solid was enabled by multiple innovative experimental advances and instrumentations in our lab at Princeton.” An elemental solid serves as an invaluable experimental platform for testing various concepts of topology. Up until now, bismuth has been the only element that hosts a rich tapestry of topology, leading to two decades of intensive research activities. This is partly attributed to the material’s cleanliness and the ease of synthesis. However, the current discovery of even richer topological phenomena in arsenic will potentially pave the way for new and sustained research directions. “For the first time, we demonstrate that, akin to different correlated phenomena, distinct topological orders can also interact and give rise to new and intriguing quantum phenomena,” Hasan said.


The cloud is benefiting IT, but not business

According to a recent McKinsey survey that engaged about 50 European cloud leaders, the benefits of cloud migration have yet to be fully realized. In other words, cloud migrations are not as universally beneficial as we’ve been led to believe. I’m not sure why this is news to anyone. The central promise of cloud computing was to usher in a new era of agility, cost savings, and innovation for businesses. However, according to the McKinsey survey, only one-third of European companies actively monitor non-IT outcomes after migrating to the cloud, which suggests a less optimistic picture. Moreover, 71% of companies measured the impact of cloud adoption solely through the prism of IT operational improvements rather than core business benefits. This imbalance raises a critical question: Are the primary beneficiaries of cloud migration just the tech departments rather than the broader business units they’re supposed to empower? Cloud computing technology is often associated with business agility and new revenue generation, but just 37% report cost savings outside of IT. Only 32% report new revenue generation despite having invested hundreds of millions of dollars in cloud computing.


Securing Mobile Apps: Development Strategies and Testing Methods

Ensuring secure data storage is crucial in today's technology landscape, especially for apps. It's vital to protect sensitive information and financial records to prevent unauthorized access and data breaches. Secure data storage includes encrypting information both at rest and in transit using strong encryption methods and secure storage techniques. Moreover, setting up access controls, authentication procedures, and conducting regular security checks are essential to uphold the confidentiality and integrity of stored data. By prioritizing these data storage practices and security protocols, developers can ensure that user information remains shielded from risks and vulnerabilities. Faulty encryption and flawed security measures can lead to vulnerabilities within apps, putting sensitive data at risk of unauthorized access and misuse. If encryption algorithms are weak or not implemented correctly, encrypted data could be easily decoded by malicious actors. Poor key management, like storing encryption keys insecurely, worsens these threats. Additionally, security protocols lacking proper authentication or authorization controls create opportunities for attackers to bypass security measures.
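One way to avoid the faulty-encryption trap is to lean on a vetted library rather than a homegrown scheme. The Python sketch below uses the cryptography package's Fernet recipe to encrypt a record at rest; in practice the key would be generated once and held in a secrets manager or KMS, never stored beside the data as it is here for brevity.

```python
from cryptography.fernet import Fernet

# Key generation happens once; the key belongs in a secrets manager or KMS,
# never next to the data it protects (shown inline only for the example).
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"account": "12345678", "balance": "1042.17"}'

ciphertext = fernet.encrypt(record)     # what actually gets written to storage
plaintext = fernet.decrypt(ciphertext)  # only callers holding the key can do this

assert plaintext == record
```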


How Do Open Source Licenses Work? The Ultimate Guide

The type of license a project creator chooses should be determined with qualified legal counsel, as entering the realm of copyright law requires alignment with the project creators’ intentions; the chosen license should correspond to what the project creators want to achieve. Meanwhile, among popular licenses, the MIT License is comparatively permissive. It allows users to fork or copy the code freely, offering flexibility in how they utilize it. This stands in contrast to so-called “copyleft” licenses, like the GNU General Public License, which impose more stipulations. ... The process of changing a license can be complicated and challenging, emphasizing the importance of selecting the right license initially. When altering an open source project’s license, the new license must often comply with or be compatible with the original license, depending on its terms. Ensuring that the changes align with the copyright holders’ stakes in the project is crucial. This intricate process requires the guidance of competent legal counsel. It’s advisable to establish proper licensing from the project’s outset to avoid complexities later on.


6 Things That Will Tell You If You Are Destined for Leadership

Self-awareness about personality traits naturally leads to increased understanding and empathy when working with people who possess different traits than us. Instead of being unable to make sense of others' actions, we can analyze them and relate them to their inherent personality preference. This ability can prevent natural triggers towards the unknown from judging, blaming or taking things personally. ... Knowing about personality preferences is essential, but it's also crucial to be aware of how people prefer to handle change. Change is an event, but how change affects people is based on their personality traits. Some people embrace change so much they want it to be fast and vast. However, they tend to have difficulty staying focused and completing tasks. Others have an adverse reaction to change. They honor tradition and enjoy predictability.  ... Everyone has experienced negative situations, such as making the wrong choices, reacting to someone's words or behaviors with negative emotions or getting sucked into self-sabotaging thoughts. When you can bounce back from these situations with a positive mindset and be productive, you are exercising resilience.



Quote for the day:

"With desperation comes frustration. With determination comes purpose achievement, and peace." -- James A. Murphy

Daily Tech Digest - April 12, 2024

Architecture is about tradeoffs. It is about spending money on one thing over another. It is about decisions. So when you tell me to develop productivity, I think that is a great measure. But I also start wondering about quality. About satisfaction. Is that productivity a measure of one person? Every person? What toolset did they use? The same goes for content generation. An AI image is neat at first, but do we get tired of them? How do I measure the value of human-created work? Is there profit in that? Order management, electricity use, all of these measures are valuable. So when you hear about an AI business case… do you have a business case? Are the benefits REAL? ... Everything comes with pros/cons and we need a system in place to handle this change rate. This is true of all major human endeavors. Think of child workers during industrialization. Or the horrible cost to humanity of the intensity of urbanization and how it has endangered our planet. Only now are we coming to grips with all of that structural complexity. And even that is going to require decades more commitment. Technology and, specifically, AI is no different.


The Pitfalls of Periodic Penetration Testing & What to Do Instead

While periodic penetration testing can provide a snapshot of your organization’s security posture, it often fails to account for the dynamic nature of cyber threats. Organizations must continuously test their security measures to identify and neutralize emerging threats in real time and effectively mitigate risks. Organizations can leverage various approaches and tools to implement continuous cybersecurity testing, such as the Atomic Red Team by Red Canary, an open-source library of tests mapped to the MITRE ATT&CK framework that security teams can use to simulate adversarial activity and validate their defenses. These tools can help prioritize and mitigate potential cyber-attacks by automating security testing and providing valuable insights into adversary tactics and techniques. Endpoint security testing and firewall testing are excellent starting points for implementing continuous cybersecurity testing. By simulating phishing emails, running PowerShell commands at endpoints, and monitoring VPN logins at the firewall level, organizations can proactively identify potential vulnerabilities and mitigate them before cyber attackers can exploit them.


Generative AI Sucks: Meta’s Chief AI Scientist Calls For A Shift To Objective-Driven AI

Unlike current AI, which excels in narrow domains without grasping causality, objective-driven AI would be capable of causal reasoning and understanding the relationships between actions and outcomes. This shift would allow AI to plan and adapt strategies in real time, grounded in a nuanced comprehension of the physical and social world. Objective-driven AI is not just an incremental improvement but a leap toward machines that can truly collaborate with humans, offering insights, generating solutions, and understanding the broader impact of their actions. This vision represents a significant shift towards creating AI that can navigate the complexity of the real world with intelligence and purpose. ... Despite these challenges, LeCun is optimistic about the future, firmly believing that AI will eventually surpass human intelligence across all domains. This conviction is not grounded in wishful thinking but in a clear-eyed assessment of technological progress and the potential for groundbreaking scientific discoveries. However, LeCun also emphasizes that this evolution will not happen overnight or without a radical rethinking of our current approaches to AI development.


Strategies to cultivate collaboration between NetOps and SecOps

Collaborative culture starts at the top. The leaders of these teams need to collaborate and communicate consistently. They cannot have a turf war over each team’s roles and must understand each team’s responsibilities. Whether it’s shadowing a member of the other team for a day or taking opportunities to get to know other teams outside of work, establishing a collaborative culture is an important long-term investment for mutual success. ... AI and automation will blur the lines between these two teams, as projects focused on these elements are ones that can be tackled together. For example, having your vulnerability management tool automatically open tickets for other IT teams can create a feeling that the security team is dumping vulnerabilities over the wall.  ... The SecOps team tends to secure the budget as they take on the risks to the company. For instance, if a project is done, how does it reduce risk, and if it is not done, what risk does the company retain? Automation and AI tools are using network traffic (packet data) to create workflows and feeding this data into large language models. Both teams can use these LLMs to solve network and security issues.
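Mechanically, the ticket-automation example above might look like the Python sketch below, which posts one ticket per scanner finding to a ticketing system's REST API. The endpoint, payload fields, and findings are hypothetical; automating this step still depends on the cross-team agreement discussed above so it does not feel like dumping work over the wall.

```python
import requests

TICKET_API = "https://ticketing.example.internal/api/tickets"  # hypothetical endpoint

def open_ticket_for_finding(finding: dict) -> None:
    """Create one ticket per finding, routed to the owning IT team."""
    payload = {
        "title": f"[Vuln] {finding['cve']} on {finding['host']}",
        "severity": finding["severity"],
        "assignee_group": finding["owning_team"],
        "description": finding["summary"],
    }
    response = requests.post(TICKET_API, json=payload, timeout=10)
    response.raise_for_status()

# Illustrative scanner output; a real integration would read this from the
# vulnerability management tool's export or API.
findings = [
    {"cve": "CVE-2024-0001", "host": "web-01", "severity": "high",
     "owning_team": "platform", "summary": "Outdated TLS library on public endpoint."},
]

for finding in findings:
    open_ticket_for_finding(finding)
```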


Down with Detection Obsession: Proactive Security in 2024

Now, as boards of directors and C-suites are expected to be more security savvy, they are asking important risk questions of their CISOs: Given all this spending on finding our problems, are we secure? Are we better off than we were a year ago or two years ago, or three years ago? And few security executives can answer those questions with comfort, because historically they were not focused on addressing risk, they were focused on discovering the risk. As time goes on and the security leader’s role becomes more business-centric, the benefits of taking a more proactive approach to security will continue to grow and shine. For example, the role of vulnerability management in providing improved risk reduction, achieving regulatory compliance, and cost savings. By actively seeking and addressing vulnerabilities, organizations can significantly reduce their overall attack surface, minimizing their chances of security breaches, data leaks and more. Many industries, like health care and financial services, have strict regulations governing the protection of sensitive data.


Agile development can unlock the power of generative AI - here's how

"The beauty of Agile is you see the fruits of your work quicker. You get feedback. And that's true with innovation generally -- the faster you can speed up cycle times, the better." Hakan Yaren, CIO at APL Logistics, said to ZDNET that another benefit of Agile is that it's well-suited to the modern digital environment. Analyst Gartner suggested that 80% of technology products and services this year will be built by people who are not technology professionals. Yaren said Agile -- with its focus on joined-up thinking and cross-business approaches -- is a good fit for the decentralized nature of modern IT. "With AI and cloud, the barriers to entry are becoming lower and people in the business are making IT decisions," he said. "Agile is the right methodology to deal with many of these processes because of the speed of change." However, Yaren has a warning for IT professionals: The complexities you face could increase as more line-of-business employees test emerging technologies. "Trying to connect these solutions, and making sure they're secure, reliable, and you can connect the dots across them, is becoming even more challenging," he said.


The benefits of leveraging hybrid cloud automation

To optimise hybrid cloud architecture, most experts endorse automation, given its flexibility, simplicity and scalability. They believe automation is necessary to draw some of the benefits of the cloud back into the on-premises systems and the hybrid architecture. Automation can ensure a more seamless way for end-users to requisition an organisation’s services, regardless of its location. As more applications move into hybrid and multi-cloud environments, companies can explore several ways to automate manual processes taking place in the cloud. Crucial cloud automation aspects cover deployment, provisioning, compliance, configuration management, scaling, and more. Hybrid cloud automation examples include establishing a network in the cloud and configuring cloud servers. Cloud automation can also be used for managing server capacity, spinning up new environments and resources, configuring software and systems, rolling out software configurations whenever required, taking systems online and offline as needed to balance the load, scaling across data centres, and moving into a public cloud environment when handling front-end web services or high workloads that are on- or off-premises.
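A stripped-down illustration of the scaling automation described above: check a utilization metric and burst into, or retire capacity from, the public cloud accordingly. The thresholds and provider calls are placeholders; real automation would use the cloud provider's SDK or an orchestration tool.

```python
SCALE_UP_THRESHOLD = 0.80    # fraction of capacity in use before bursting to cloud
SCALE_DOWN_THRESHOLD = 0.30  # fraction below which idle cloud capacity is retired

def current_utilization() -> float:
    return 0.85  # placeholder metric; would come from monitoring

def provision_cloud_instance() -> None:
    print("Spinning up an additional front-end instance in the public cloud")

def retire_cloud_instance() -> None:
    print("Taking an idle cloud instance offline to save cost")

def rebalance() -> None:
    utilization = current_utilization()
    if utilization > SCALE_UP_THRESHOLD:
        provision_cloud_instance()   # burst into the public cloud under high load
    elif utilization < SCALE_DOWN_THRESHOLD:
        retire_cloud_instance()      # shrink back when demand drops

rebalance()
```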


Why strategists should embrace imperfection

We’re seeing paralysis as people wait for some kind of equilibrium or stasis to reemerge. Or they get nervous and leap before they look, whether it’s an acquisition or some other move. We wanted to lay out a different path that involves confidently stepping into risk by using a set of six mindsets that we put under the broad heading of imperfectionism. Imperfectionism sounds like a bad thing, but what we mean is accepting the ambiguity of not having perfect knowledge before making strategic moves. ... The kind of uncertainty that we face today really is twofold. One is the type we see in the newspaper, which is economic uncertainty, external shocks like the war in Ukraine. But there’s a much more fundamental kind of uncertainty we face now, which is very rapid technological change. Artificial intelligence, automation, programmable biology, and other disruptions are blurring industry boundaries and what it means to be a competitor in a particular industry. We’re also seeing the rise of supercompetitors like Apple, Amazon, and Google, which can operate across many industry spaces. 


What the American Privacy Rights Act Could Mean for Data Privacy

For companies that collect and monetize consumer data, the APRA could mean making changes to the way they do business. The APRA sets out requirements for issues like data minimization, transparency, consumer choice and rights, data protection, and executive responsibility. “It basically means that now they’re going to be able to collect less data: good for consumers and not so good if you're a company that needs all that data,” Antonio Sanchez, principal cybersecurity evangelist at Fortra, a cybersecurity and automation software company, tells InformationWeek. The draft legislation drills down to data privacy at an operational level. For example, it requires covered entities to appoint a privacy or data security officer or officers. “There is a real sense that a significant part of managing a modern privacy program is not found in the rules themselves but in the operation that gives life to those rules,” says Hughes. If the APRA goes into effect, covered entities will have 180 days to comply with its requirements. Non-compliance after that timeline could be met with enforcement action. 


Data Stewardship Best Practices

Business leaders must understand what makes data stewards successful in order to find the ideal candidates for the role. Johnson outlined some of the characteristics best suited for stewards. Coming from both business and IT: Many times, data stewards do best when they have a background in both technology and line-of-business department work. Johnson referred to them as “purple people” – having skills and experience spanning these two different job positions. Data stewards should be multiskilled, as well as “bilingual” and “bicultural” ... Acting as bridges: Data stewards should be able to translate both simple and complex information and communicate it in written or oral form. Johnson recommended that they also have a good sense of objectivity, distinguishing fact from fiction, and be able to envision what challenges and issues a company might face in the future. Excited by data: Thinking globally and participating in an influence culture, data stewards should get immersed in the ideas surrounding good Data Governance and better data handling. “When you’re talking to somebody, and they get really excited about data and their eyes light up, and they’re all energized and stuff, it’s a good sign – they might be fit for a steward role,” Johnson said.



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - April 10, 2024

Time to rethink cloud strategies in the AI era

The proliferation of AI technologies is busting datacenter boundaries, as running data close to compute and storage capabilities often offers the best outcomes. No workload embodies this more than GenAI, whose large language models (LLMs) require large amounts of compute processing. While it may make sense to run some GenAI workloads in public clouds – particularly for speedy proofs of concept – organizations also recognize that their corporate data is one of the key competitive differentiators. As such, organizations using their corporate IP to fuel and augment their models may opt to keep their data in house – or bring their AI to their data – to maintain control. The on-premises approach may also offer a better hedge against the risks of shadow AI, in which employees’ unintentional gaffes may lead to data leakage that harms their brands’ reputation. Fifty-five percent of organizations feel preventing exposure of sensitive and critical data is a top concern, according to Technalysis Research. With application workloads becoming more distributed to maximize performance, it may make sense to build, augment, or train models in house and run the resulting application in multiple locations.


Zero Trust for Legacy Apps: Load Balancer Layer Can Be a Solution

There are a number of specific strengths inherent to deploying zero trust at the load balancer layer via SAML. Implementing zero trust at the load balancer layer allows organizations to enforce a unified access control mechanism for all applications. This ensures consistent security enforcement across diverse technological platforms, and extends to internal nodes policing East-West traffic or externally to cloud native service networking and partner APIs. Certificate management and rotation is a considerable pain point for cloud native applications, let alone for hybrid constellations of applications that might range from a few months old to 30 years old. Load balancers natively manage TLS certificates, offering a centralized point for efficient certificate management that is relatively application agnostic. This centralization not only eases the administrative burden but also enhances security by ensuring timely certificate renewal and efficient handling of encryption/decryption processes. By moving zero trust into an infrastructure point that is already integrated with all other parts of your infrastructure, this approach significantly reduces the complexity associated with modifying each application individually to align with zero trust principles. 


Is the power of people skills enough to keep gen AI in check?

The first area of interest is with Copilot technologies in the context of integrated development environments that the company is already using. “First, we optimize the individual,” he says, using gen AI to make developers more productive. But it’s not about reducing headcount. “My issue isn’t that we have too many developers,” he says. “It’s how we can go faster. I have to compete harder on brain power in a market that’s growing quickly. I’m looking to turn every developer into the single most productive engineer on the team.” And even if the engineers do get dramatically more productive, he says, there’s a big backlog of work the company wants to get done. But just moving faster isn’t enough, he says. Without communication skills and the curiosity needed to find out why things are being done, those productivity benefits can easily go to waste. “I can produce 10 times more useless garbage,” he says. The company has three full-time people who create internal training materials, as well as vet third-party training providers. “We’ve actually made significant investments in learning and development across a variety of domains,” Merkel says. “Core leadership skills is one.”


Why are many businesses turning to third-party security partners?

As organizations weigh the cost of security solutions alongside the rising cost of experienced employees, some are electing to prioritize spending in other areas, forgoing software licenses in favor of third-party partnerships. While moving from an in-house security program to one that relies on outside partners can represent a significant shift in mentality for many organizations, a growing number have found that working with third-party experts can help them secure their systems in a more effective—and scalable—manner. As the threat landscape continues to evolve at a rapid pace, no longer having to track and account for each new development can free up substantial time and resources for organizations. Another factor driving organizations toward external partnerships is the challenge of application onboarding. Enterprises use a massive number of software solutions, cloud services, and other applications, and ensuring those applications are properly configured and protected can be a challenge. As data privacy and security regulations continue to arise in a wide range of jurisdictions, it’s increasingly critical for today’s businesses to clearly demonstrate that they are effectively managing and protecting data within their applications.


Why global warnings about China’s cyber-espionage matter to CISOs

Chinese APTs have penetrated networks of companies providing goods and services to the defense sector, a leading equipment provider of 5G network equipment, and entities involved in wireless technology. Those compromised not only permit the pilfering of intellectual property, but China is also able to leverage their acquired knowledge or capability to continue to engage in both internal and external efforts to silence those in dissent of the current government. We have learned of the external effort largely through the various arrests and prosecutions of individuals, both Chinese nationals and those whom they have suborned to do their bidding. This effort has a moniker — Operation Fox Hunt. This operation was ordered created by President Xi Jinping in 2014. China has had varying degrees of success in its intimidation and coercion methodologies. FBI Director Christopher Wray described this operation as “a sweeping bid by Xi to target Chinese nationals who he sees are threats and who live outside of China, across the world. We’re talking about political rivals, dissidents, and critics seeking to expose China’s extensive human rights violations.”


Maximizing Business Value with Generative AI

As C-Suite leaders begin to understand GenAI, they are starting to uncover some questions: Which use cases will deliver the most value for my business? And how do we transition from a Proof of Concept (PoC) to full-scale implementation or enterprise-level deployment? A lot of the work currently remains in the PoC stage, though some areas are ahead of the curve: chatbots for HR and legal contracts, for example, have become relatively common. So, now what remains to be seen is how enterprises move toward widespread adoption by integrating GenAI into other business processes. To move from the PoC to the deployment stage, organizations must identify their strategy, as we covered earlier, as well as the use cases with high impact. Prioritizing these use cases based on their impact, cost, data readiness, and resistance to adoption is essential. Becoming familiar with the limitations and capabilities will also be important for decision-makers. A roadmap must be developed, and you must leave room for the possibility of failure. Once this is done, various PoCs and pilots can be launched, based on the problems an organization genuinely wants to solve. Additionally, transparency with your internal stakeholders is key. 


How to answer “why should we hire you?” in a job interview

In an increasingly automated world, problem-solving and critical thinking are more important than ever. Here’s your chance to position yourself as a solution to current challenges. First, state your understanding of the big issue you see the company or industry facing, be that tech disruption, changing customer preferences, in-house inefficiencies, or something else. Next, state the transferable experience that would help solve it. For example, “In my current role, I led a cross-functional team that delivered an AI integration with a shorter time-to-market, which added value for our customers and helped increase product subscriptions by 12 per cent in three months.” Lastly, bring it back to this organisation: “For this role, I’m keen to leverage this experience and to collaborate with different teams to find a solution to X that works for the company in the short and long term.” ... Finally, show how you’ll fit in, and this doesn’t mean highlighting how you went to the same school or uni as someone in the C-suite. Here you should focus on the company’s values and how they align with your own experience. Look for natural fits like customer focus, transparency, collaboration, or trust.


Securing Open Source Software, the Cyber Resilience Act Way

The European Union (EU) figured this out a while back. In its Cyber Resilience Act (CRA), it asked the open source community to establish common specifications for secure software development. The Eclipse Foundation and a host of other leading open source organizations, including the Apache Software Foundation, Blender Foundation, OpenSSL Software Foundation, PHP Foundation, Python Software Foundation and the Rust Foundation, are up for the challenge. The Eclipse Foundation is spearheading the effort to create a unified framework for secure software development. The foundations and allies are doing this via a new working group, established under the Eclipse Foundation Specification Process. The collaboration is spurred by more than regulatory compliance. In an era where open source software is pivotal to modern society, the imperative for safety, reliability and security in software has never been more critical. As Arpit Joshipura, the Linux Foundation‘s senior VP of networking, said at the Open Source Summit Europe in Bilbao, Spain, last year, “We must look at the end goal. The end goal for all of us is the same. We want to secure software, and we want to secure open source software.”


7 Top IT Challenges in 2024

“Government agencies at all levels are issuing an increasing number of regulations or mandates that need to be complied with. Some are inconsistent, some are duplicative but require separate reporting. They all have penalties for non-compliance, so that creates liability concerns that shift the focus from security to compliance,” says Scott Algeier, executive director of industry association Information Technology-Information Sharing and Analysis Center (IT-ISAC). “Security and compliance are not the same thing, so you may need to make additional investments to be both secure and compliant.” ... Return on investment is a key metric for financial services companies. However, after years of regulation, mergers, and growth, technology estates have become bloated and underperforming. As a result, financial institutions want technology that is user-friendly, value-driven and rapidly adaptable to new technologies like AI. “To accomplish this goal, CIOs and CTOs are facing the need to streamline enterprises by reducing spans and layers, increasing reuse of architectural patterns and ultimately increasing [the] productivity of their organizations,” says Fredric Cibelli.


US Bipartisan Privacy Bill Contains Cybersecurity Mandates

The bill's main focus is on creating rights of access and correction and giving consumers the right to opt out of their data being used for targeted advertising. It would prohibit corporations from retaliating against individuals for exercising their opt-out rights, such as by denying service or charging different rates. It would also create oversight requirements for large companies with minimum revenues of $250 million that use decision-making algorithms, including algorithms that facilitate human decision-making. Those companies would need to assess their algorithms annually for potential biases and evaluate them for bias prior to putting them into production. The annual assessments would be made publicly available and transmitted to the Federal Trade Commission. Consumers would have the right to opt out of the use of such algorithms in matters such as access to housing, employment, education, healthcare and financial activities. The FTC would publish guidance on how to comply with that section within two years. Individuals could sue companies for violations of most sections of the act. The bill would preempt the bevy of state data privacy laws that have come up in recent years, including in California.
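
The bill does not spell out which fairness metrics such an annual assessment must use. As a hedged illustration of what one common check might look like, the sketch below computes the disparate impact (adverse impact) ratio on toy approval data; the metric choice and the 0.8 "four-fifths" threshold are standard examples rather than requirements from the bill.

```python
# Illustrative sketch of one common bias check an annual algorithm assessment
# might include: the disparate impact (adverse impact) ratio. The metric and the
# 0.8 "four-fifths" threshold are conventional examples, not mandates in the bill.

def selection_rate(decisions):
    """Share of positive outcomes (e.g., approvals) in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected_group, reference_group):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected_group) / selection_rate(reference_group)

# Toy data: 1 = approved, 0 = denied
ratio = disparate_impact(protected_group=[1, 0, 0, 1, 0], reference_group=[1, 1, 0, 1, 1])
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: flag for review in the annual assessment.")
```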



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - April 08, 2024

Streamlining application delivery and mitigating risks for critical infrastructure

The emphasis on cloud and edge computing introduces challenges in orchestrating seamless application delivery. The first hurdle is effectively packaging applications for efficient deployment, installation, and execution across various computing environments. For instance, food delivery platforms such as Zomato or Swiggy require timely system updates for operational efficiency. The second challenge involves addressing latency and distribution unreliability, especially in scenarios where data transfer delays or inconsistent connectivity impede the seamless and efficient distribution of applications across networks. Reliability in application upgrades therefore becomes imperative to counter potential disruptions caused by device issues. The third challenge involves maintaining application reliability, which requires continuous performance monitoring. ... The interconnectedness of supply chain applications necessitates a proactive approach to managing complexities, such as addressing software risks. It involves creating a comprehensive bill of materials and recognising the dependencies crucial for bundling software into devices or applications.
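
To make the bill-of-materials idea concrete, here is a minimal, hedged sketch that simply enumerates the Python packages installed in the current environment as a rudimentary dependency inventory. Real bills of materials typically use standard formats such as SPDX or CycloneDX and cover far more than one language's packages; this only illustrates the idea of recording what you ship.

```python
# Illustrative sketch: list installed Python packages as a rudimentary
# dependency inventory. Real SBOMs normally use standard formats (SPDX,
# CycloneDX) and span the whole software stack, not just Python packages.

import json
from importlib.metadata import distributions

inventory = sorted({d.metadata["Name"]: d.version for d in distributions()}.items())
print(json.dumps([{"name": n, "version": v} for n, v in inventory], indent=2))
```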


How can the energy sector bolster its resilience to ransomware attacks?

For energy companies, this means undertaking systematic vulnerability assessments and penetration testing, with a specific focus on applications that interface between IT and OT systems. It also requires adopting a comprehensive security strategy that includes routine security monitoring, patch management and network segmentation, and implementing rigorous incident reporting and response. Once the fundamentals are in place, energy providers should explore more advanced technologies and automation opportunities that can reduce the time between detection and response, such as AI-powered tools that actively monitor the network in real time to detect anomalies and predict potential threat patterns. ... In addition to technological defenses, organizations should also focus on the human element: phishing and social engineering attacks keep targeting employees and third-party contractors, and they continue to be effective methods for initial intrusion. Training programs that enhance employee awareness of these and other tactics are essential, while regularly updated sessions can help staff identify and respond to potential threats, thereby reducing the likelihood of a successful attack.
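
The article describes AI-powered anomaly detection only in general terms. As a hedged stand-in for what the simplest version of that idea looks like, the sketch below flags unusual spikes in a stream of traffic measurements against a rolling statistical baseline; the window size and 3-sigma threshold are assumptions for illustration, not recommendations from the article.

```python
# Illustrative sketch: flag unusual spikes in per-host traffic volumes using a
# rolling mean and standard deviation. A simple statistical stand-in for the
# AI-powered monitoring mentioned above; window and threshold are assumptions.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples deviating more than `threshold` sigmas from the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Toy data: steady traffic with one exfiltration-like spike at the end
traffic_mb = [10, 11, 9, 12, 10] * 5 + [500]
for idx, val in detect_anomalies(traffic_mb):
    print(f"Sample {idx}: {val} MB looks anomalous")
```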


Implementing AI Ethics in Business Strategy

In the realm of AI ethics, monitoring and evaluation play a crucial role in ensuring continuous improvement and alignment with ethical standards. By consistently monitoring the outcomes of AI algorithms and evaluating their impact on various stakeholders, organizations can proactively identify ethical concerns and take corrective actions. This dynamic approach not only mitigates potential risks but also fosters a culture of transparency and accountability within the business. Ethical considerations should be integrated into all stages of AI implementation, from design to deployment. Continuous monitoring allows organizations to adapt to changing ethical landscapes, emerging risks, and evolving regulations. ... Emphasizing ethical practices in business is not just a moral obligation but a strategic imperative for long-term success. In today’s interconnected world, consumers are becoming increasingly conscious of the companies they support, favouring those that demonstrate a commitment to ethical values. By prioritizing ethics in decision-making processes and embracing transparency, businesses can build trust with their stakeholders and create sustainable relationships that drive growth.


The 6 Traits You Need To Succeed in the AI-Accelerated Workplace

AI copilots can provide valuable support, but humans need to exercise critical thinking skills to interpret data, make decisions and solve complex problems effectively. For any area of study, there are various levels of understanding. The most basic is "You don't know what you don't know," then comes "You know what you don't know," next up is "You have the knowledge necessary to interact," and the final level is "You are the subject matter expert." ... Modern-day knowledge workers need to adapt to new technologies and workflows quickly. Just as a great rider becomes one with the horse, their movements naturally synchronized, knowledge workers need to become one with AI assistants and bots, synchronizing and adapting their style and pace of work to the latest tools and technologies being introduced. ... Resilience is the most important quality amid the mist of uncertainty surrounding future job landscapes. It is the best quality an employee can have, equipping one with the mental fortitude to embrace innovation, learn new skills, and confidently navigate unfamiliar territories.


Speaking Cyber-Truth: The CISO’s Critical Role in Influencing Reluctant Leadership

It’s not just about pointing out problems; the CISO must also be a problem-solver. They must work collaboratively with other leaders to find ways to enable the business while protecting it, providing insights and recommendations that allow others to make informed decisions based on the company’s risk appetite and strategic direction. But the effectiveness of a CISO is not measured only by the absence of breaches; it is also measured by their ability to enable the business to take calculated risks confidently. The CISO must work to ensure that cyber security is built into the DNA of every project. They must advocate and champion secure-by-design principles so that security is not an afterthought but a fundamental component of every initiative. By forcing organizations to acknowledge and address cyber risks proactively, CISOs not only protect the enterprise but also contribute to its resilience and long-term success. CISOs also face the issue of risk prioritization. In an ideal world, every vulnerability would be patched, every threat neutralized, every alert investigated. However, resources are constrained, investments are finite, and not all risks are created equal.


4 Lessons We Learned From The Change Healthcare Cyberattack

Given the massive scale of the Change Healthcare attack, it goes without saying that the aftermath has been chaotic. Providers and pharmacies were forced to expend time and resources on manual claims processing, and many continue to face payment delays that are hurting their cash flow. Change Healthcare’s parent company, insurance giant UnitedHealth Group, has faced widespread criticism for its handling of the attack. The American Hospital Association has been one of the biggest voices in this regard. In the organization’s March 13 letter to the Senate Finance Committee, the AHA wrote that UnitedHealth has done nothing to materially address “the chronic cash flow implications and uncertainty that our nation’s hospitals and physicians are experiencing” as a result of the attack. The long recovery time indicates a potentially poor business continuity plan (BCP), Kellerman noted. In his eyes, every healthcare organization needs a BCP in case of a potential cybersecurity event. “[The plan] should address business continuity in case of crisis or disaster, including backups and the ability to restore them in a timely manner.”


Is HR ready for generative AI? New data says there's a lot of work to do

The potential risks of AI in HR are rooted in a lack of trust and in potential bias, with AI delivering recommendations or suggestions based on models that may have been unintentionally trained on datasets that reinforce biases. Core HR functions could also be affected by data compromises, AI hallucinations, bias, and toxicity. The common theme across all these areas of potential risk is that human steps can mitigate them. AI adoption in HR is on the rise: Valoir research found that 50% of organizations are either currently using or planning to apply AI to recruiting challenges in the next 24 months, followed closely by talent management and by training and development. ... Valoir recommends that HR leaders not only select vendors and technologies that can be trusted, but also put in place the appropriate policies, procedures, safeguards, and training for both HR staff and the broader employee population. HR departments will need to consider how they communicate those policies and training to both their internal HR teams and the wider workforce.


The Complexity Cycle: Infrastructure Sprawl is a Killer

We have gone from imperative APIs that required hundreds of individual API calls to configure a system to today’s declarative APIs that use only one. It’s easier, of course, but only the interface changed. The hundreds of calls mapped to individual configuration settings still need to be made; you just don’t have to make them yourself. The complexity was abstracted away from you and placed firmly on the system and its developers to deal with. Now, that sounds great, I’m sure, until something goes wrong. And wrong something will go; there’s no avoiding that either. Zero Trust has an “assume breach” principle, and Zero Touch infrastructure (which is where the industry is headed) ought to have a similar principle, “assume failure.” It’s not that complexity evolves; complexity comes from too many tools, consoles, vendors, environments, architectures, and APIs. As an enterprise evolves, it adds more of these things until complexity overwhelms everyone and some type of abstraction is put in place. We see that abstraction in the rise of multicloud networking, which addresses the complex web of multiple clouds, and in microservices networking, which is trying to unravel the mess inside of microservices architectures.
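
To illustrate the imperative-versus-declarative contrast described above, here is a minimal sketch using an entirely hypothetical client interface (none of the names correspond to a real product's API). The point it shows is that the declarative path still results in every setting being applied; the caller just no longer issues those calls.

```python
# Illustrative sketch of imperative vs. declarative configuration, using a
# hypothetical client API. Imperative: one call per setting, owned by the
# caller. Declarative: one call with the desired state; the system reconciles.

class FakeClient:
    """Stand-in for a device or controller API; records calls instead of configuring anything."""
    def __init__(self):
        self.calls = []
    def set(self, key, value):          # imperative: one call per setting
        self.calls.append(("set", key, value))
    def apply(self, desired_state):     # declarative: hand over the whole desired state
        self.calls.append(("apply", desired_state))

DESIRED_STATE = {
    "listener_port": 443,
    "tls": "enabled",
    "health_check_interval_s": 5,
    # ...in practice, potentially hundreds more settings
}

imperative, declarative = FakeClient(), FakeClient()

for key, value in DESIRED_STATE.items():   # the caller owns ordering and error handling
    imperative.set(key, value)

declarative.apply(DESIRED_STATE)           # the system owns reconciliation (and its failures)

print(len(imperative.calls), "imperative calls vs", len(declarative.calls), "declarative call")
```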


Solar Spider Spins Up New Malware to Entrap Saudi Arabian Financial Firms

JSOutProx is well known in the financial industry. Visa, for example, documented campaigns using the attack tool in 2023, including one pointed at several banks in the Asia-Pacific region, the company stated in its Biannual Threats Report published in December. The remote access Trojan (RAT) is a "highly obfuscated JavaScript backdoor, which has modular plugin capabilities, can run shell commands, download, upload, and execute files, manipulate the file system, establish persistence, take screenshots, and manipulate keyboard and mouse events," Visa stated in its report. "These unique features allow the malware to evade detection by security systems and obtain a variety of sensitive payment and financial information from targeted financial institutions." JSOutProx typically appears as a PDF file of a financial document in a zip archive. But really, it's JavaScript that executes when a victim opens the file. The first stage of the attack collects information on the system and communicates with command-and-control servers obfuscated via dynamic DNS.
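
On the defensive side, the delivery pattern described above (a script named to look like a PDF inside a zip attachment) lends itself to a very simple gateway heuristic. The sketch below is a generic, illustrative check, not a published detection rule for JSOutProx or any vendor's signature.

```python
# Illustrative sketch: flag zip attachment members whose names look like
# documents but are actually scripts (e.g. "statement.pdf.js"), the delivery
# pattern described above. A generic heuristic for illustration only.

import zipfile

SCRIPT_EXTS = (".js", ".jse", ".vbs", ".wsf")
DECOY_EXTS = (".pdf", ".doc", ".docx", ".xls", ".xlsx")

def suspicious_members(zip_path):
    """Return archive members with document-like names that really end in a script extension."""
    with zipfile.ZipFile(zip_path) as zf:
        return [
            name for name in zf.namelist()
            if name.lower().endswith(SCRIPT_EXTS)
            and any(decoy + "." in name.lower() for decoy in DECOY_EXTS)
        ]

# Usage (hypothetical file): suspicious_members("attachment.zip") -> ["statement.pdf.js"]
```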


Biggest AI myths in customer experience

Recent months have seen numerous examples of chatbots going rogue and tarnishing the reputation of the organisations that implemented them. From incorrect refund policies costing a Canadian airline hundreds of dollars to a parcel delivery firm swearing at customers, GenAI is not ready to take off the training wheels just yet. Large language models (LLMs) such as ChatGPT are subject to hallucinations which, without safeguards, could negatively impact the customer experience. Customers would quickly lose patience with brands if they were misled during interactions. A tool that should vastly improve first-contact resolution could achieve the opposite, with customers needing further support to correct previous mistakes. That said, it is possible to reduce the likelihood of egregious chatbot errors through appropriate optimisation techniques.  ... The implementation of AI in CX should be a gradual process. If phase one of AI development was to streamline communications before, during and after interactions, future phases should focus on expanding the scope of the contact centre, encompassing more traditionally back-office and professional roles and creating a hub for communications, relationship building and data orchestration.
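
The article mentions "appropriate optimisation techniques" without detailing them. As one hedged example of a safeguard pattern, the sketch below only lets a bot's reply through when it can be loosely grounded in retrieved policy text, and otherwise escalates to a human; the word-overlap check and 0.5 threshold are crude stand-ins for real grounding or fact-checking methods, shown purely to illustrate the pattern.

```python
# Illustrative sketch of a guardrail: answer only when the reply is grounded in
# retrieved policy text, otherwise hand off to a human agent. The overlap check
# and threshold are crude illustrative assumptions, not a production technique.

import re

def grounded(reply, policy_snippets, min_overlap=0.5):
    """Crude check: enough of the reply's words also appear in the retrieved policy text."""
    tokenize = lambda text: set(re.findall(r"[a-z0-9]+", text.lower()))
    reply_words = tokenize(reply)
    policy_words = tokenize(" ".join(policy_snippets))
    return len(reply_words & policy_words) / max(len(reply_words), 1) >= min_overlap

def answer_or_escalate(reply, policy_snippets):
    if grounded(reply, policy_snippets):
        return reply
    return "Let me connect you with a human agent who can confirm our policy."

snippets = ["Refunds are available within 30 days of purchase with a valid receipt."]
print(answer_or_escalate("Refunds are available within 30 days of purchase.", snippets))
print(answer_or_escalate("You can get a full refund any time, no questions asked.", snippets))
```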



Quote for the day:

"To have long-term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley