Daily Tech Digest - February 23, 2024

When cloud AI lands you in court

In a recent small claims ruling, Air Canada lost because its AI-powered chatbot provided incorrect information about bereavement fares. The chatbot suggested that the passenger could retroactively apply for bereavement fares, even though the airline’s bereavement fares policy said otherwise. ... In the Air Canada case, the tribunal called it a case of “negligent misrepresentation,” meaning that the airline had failed to take reasonable care to ensure the accuracy of its chatbot. The ruling has significant implications, raising questions about company liability for the performance of AI-powered systems, which, in case you live under a rock, are coming fast and furious. The incident also highlights the vulnerability of AI tools to inaccuracies, most often caused by the ingestion of training data containing erroneous or biased information. That can lead to adverse outcomes for customers, who are pretty good at spotting these issues and letting the company know. The case underscores the need for companies to reconsider the extent of AI’s capabilities and their potential legal and financial exposure to the misinformation that drives bad decisions and outcomes from AI systems.


Rackspace’s MD on addressing the shortage of senior, mid-level cybersecurity talent

The Data Security Council of India (DSCI) predicts that local demand for cybersecurity professionals will reach a million positions in 2025 if the cybersecurity ecosystem continues its rapid growth. While both the government and private enterprises are taking steps to increase the number of individuals pursuing careers in cybersecurity, the impact will not be felt immediately, especially at the higher levels. As experienced professionals retire or move into more advanced roles, the industry may face a shortage of individuals with the necessary expertise and experience to fill their positions. While the influx of new graduates entering the field can fill entry-level roles, it will take more time for them to gain the experience and qualifications needed for senior and mid-level cybersecurity positions. Organisations will need to be innovative and creative in maintaining their cybersecurity posture in the face of a talent crunch. They will need to refine their strategies for attracting and retaining top talent, as well as upskilling existing employees, by leveraging the latest technological trends for more efficient cybersecurity practices.


What are the main challenges CISOs are facing in the Middle East?

The skills challenge is likely to be key as a result of the rise of disruptive technologies such as Generative AI. There will be a reshaping of the entire global workforce, and the skills needed to adequately deal with cybersecurity issues will be in short supply. The other critical challenge has to do with regulatory changes as nation-states seek to protect their citizens from cyberattacks; this typically adds to the overall cost of cyber compliance. Lastly, cybercrime will also rise, especially on digital platforms as people transact virtually. Cybersecurity Ventures expects damage costs from cybercrime to increase by about 15% each year over the next three years. ... The human resource base is key, both for cybersecurity professionals and for employees in general. In cybersecurity, precedence is always given to the protection of human life before anything else. It is therefore important to ensure that people are equipped with adequate and relevant knowledge of how to identify indicators of attack and remain alert to such attacks ... The financial services sector also relies on proprietary technology, hence any cyberattack on it could lead to huge losses and reputational damage. The sector also holds customer data and intellectual property, which is typically very sensitive information held in trust.


Practical steps on carbon accounting for data centers

Measuring the carbon and material cost of our equipment is done through lifecycle assessment (LCA). This is done by disassembling products, looking at the material content, and giving each part of this an environmental weight. This is based on where and how they were sourced and what impacts these processes have. Measuring impact using the LCA method involves drawing boundaries, making assumptions, and using estimates. These estimates are shared on platforms like EcoInvent, which give specialists shortcuts on materials and good ideas on how to fill gaps. When you read reports from manufacturers, they will state where they assume the product was delivered, where it was assembled, how long it was in use, where the materials were mined, and potentially how and where it was destroyed. They need to do this because different locations will have slightly different sets of environmental risks. There are a lot of variables in play. Because of this, there is wide variance between LCAs from different manufacturers of very similar products.
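
The arithmetic behind an LCA can be sketched in a few lines. Everything below is hypothetical for illustration: the component names, base footprints, region factors, transport figure, and the 5% end-of-life uplift are invented, but they show how boundary decisions (include disposal or not) and sourcing assumptions move the total.

```python
# Hypothetical component footprints (kg CO2e) with sourcing-location factors.
components = [
    {"part": "mainboard", "base_kgco2e": 45.0, "region_factor": 1.2},
    {"part": "chassis",   "base_kgco2e": 18.0, "region_factor": 0.9},
    {"part": "psu",       "base_kgco2e": 12.0, "region_factor": 1.1},
]

def embodied_carbon(parts, transport_kgco2e=5.0, include_end_of_life=True):
    """Sum per-part footprints weighted by sourcing region, plus transport,
    optionally adding an assumed 5% uplift for disposal/recycling."""
    total = sum(p["base_kgco2e"] * p["region_factor"] for p in parts)
    total += transport_kgco2e
    if include_end_of_life:
        total += 0.05 * total  # assumed end-of-life uplift (illustrative)
    return round(total, 1)

print(embodied_carbon(components))
print(embodied_carbon(components, include_end_of_life=False))
```

Drawing the boundary differently (here, dropping end-of-life) changes the reported total, which is one reason LCAs of near-identical products diverge between manufacturers.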


Incorporating AI and automation into cyber risk management

AI-powered systems can significantly enhance organisational cyber defence capabilities through advanced threat detection, predictive analytics, and real-time monitoring. Next-generation AI-driven tools enable organisations to establish intelligent, secure, and automated systems capable of real-time threat detection, prevention, and prediction. AI models can be trained to identify anomalies in system behaviour, serving as an effective means of detecting potential cyber risks. This capability proves invaluable in recognizing potential security breaches or operational failures. Moreover, AI-powered threat intelligence contributes to identifying emerging threats, facilitating the development of proactive mitigation strategies. Ensuring compliance with IT regulations, such as the General Data Protection Regulation (GDPR) and Payment Card Industry Data Security Standard (PCI DSS), is achieved through the continuous monitoring capabilities of AI tools. These tools not only streamline compliance efforts but also enhance accuracy and efficiency. 
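
As a rough illustration of anomaly detection on system behaviour, the sketch below flags metric samples that sit far from the baseline. It is a toy z-score check, not a trained model, and the login counts and threshold are invented for the example.

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Hypothetical hourly login counts, with one burst that may indicate brute-forcing.
logins = [12, 15, 11, 14, 13, 12, 240, 14, 13]
print(detect_anomalies(logins))
```

Production systems layer far more signal (seasonality, per-entity baselines, model-based scoring) on top of this idea, but the core pattern is the same: learn normal, then alert on deviation.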


Adapting To Software Testing's Future: Success Factors

Risk-based testing is a strategic approach that prioritizes testing efforts based on the potential risk of failure and its impact on the project or business. By identifying the most critical areas of the application in terms of functionality, user impact, and likelihood of failure, teams can allocate their limited testing resources more effectively. ... Test selection techniques, such as test case prioritization and minimization, help teams focus on the tests that are most likely to detect defects. Prioritization involves ordering test cases so that those with the highest importance or likelihood of finding bugs are executed first. Minimization seeks to reduce the number of test cases to a necessary subset, eliminating redundancies without sacrificing coverage. ... By automating repetitive and time-consuming tests, teams can significantly reduce the time required for test execution. Automation is particularly effective for regression testing, where the same tests need to be run repeatedly against successive versions of the software. Automated tests can be executed faster and more frequently than manual tests, providing quicker feedback and freeing up human testers to focus on more complex and exploratory testing tasks.
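
Both techniques can be sketched briefly. The snippet below is a hypothetical illustration: prioritization orders tests by a simple impact × failure-rate score, and minimization uses a greedy set-cover pass over the requirements each test covers (the test names, scores, and coverage tags are invented).

```python
def prioritize(tests):
    """Order tests by descending risk score (impact * likelihood of failure)."""
    return sorted(tests, key=lambda t: t["impact"] * t["fail_rate"], reverse=True)

def minimize(tests, requirements):
    """Greedy set cover: keep the fewest tests that still cover every requirement."""
    uncovered = set(requirements)
    kept = []
    while uncovered:
        best = max(tests, key=lambda t: len(uncovered & set(t["covers"])))
        gained = uncovered & set(best["covers"])
        if not gained:
            break  # remaining requirements are not covered by any test
        kept.append(best["name"])
        uncovered -= gained
    return kept

tests = [
    {"name": "checkout", "impact": 9, "fail_rate": 0.3, "covers": ["pay", "cart"]},
    {"name": "login",    "impact": 7, "fail_rate": 0.1, "covers": ["auth"]},
    {"name": "search",   "impact": 3, "fail_rate": 0.2, "covers": ["cart"]},
]
print([t["name"] for t in prioritize(tests)])   # highest-risk first
print(minimize(tests, ["pay", "cart", "auth"])) # redundant coverage dropped
```

The greedy pass drops the "search" test because its coverage is already provided by "checkout", which is exactly the redundancy-without-coverage-loss trade-off the technique targets.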


5 Tips for Developer-Friendly DevSecOps

Many security tools are built for security professionals, so simply bolting them onto existing developer workflows can create friction. When looking to integrate a new tool into the SDLC, consider extracting the desired data from the security tool and natively integrating it into the developer’s workflow — or even better, look to a tool that’s already embedded within the flow. This reduces context switching, and helps developers detect and remediate vulnerabilities earlier. Additionally, leveraging AI tools within integrated development environments (IDEs) streamlines the process further, allowing developers to address security alerts without leaving their coding environment. ... A barrage of alerts, especially false positives, can erode a developer’s trust in the tool and compromise their productivity. A well-integrated security tool should have an alert system that surfaces high-priority alerts directly to developers — for example, alert settings based on custom and automated triage rules, filterable code scanning alerts and the ability to dismiss alerts contribute to a more effective alert system. This ensures developers can swiftly address urgent security concerns without being overwhelmed by unnecessary noise, and helps to ultimately clean up an organization’s security debt.
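
A minimal sketch of such triage rules might look like the following; the alert fields and dismiss predicates are invented stand-ins for whatever a real scanner emits.

```python
def triage(alerts, dismiss_rules):
    """Drop alerts matching any dismiss rule; surface the rest, highest severity first."""
    surfaced = []
    for alert in alerts:
        if any(rule(alert) for rule in dismiss_rules):
            continue  # matched a triage rule (e.g. known false positive)
        surfaced.append(alert)
    return sorted(surfaced, key=lambda a: a["severity"], reverse=True)

# Example custom rules: ignore findings in test fixtures and low-severity noise.
dismiss_test_paths = lambda a: a["path"].startswith("tests/")
dismiss_low_noise = lambda a: a["severity"] < 3

alerts = [
    {"id": 1, "severity": 9, "path": "src/auth.py"},
    {"id": 2, "severity": 8, "path": "tests/fixtures.py"},
    {"id": 3, "severity": 2, "path": "src/util.py"},
]
print(triage(alerts, [dismiss_test_paths, dismiss_low_noise]))
```

Only the high-severity finding in production code reaches the developer; the fixture hit and the low-severity noise are filtered out before they can erode trust in the tool.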


Leveraging automation for enhanced cyber security operations

A practical approach to refining automation logic involves leveraging experiences from cyber exercises, penetration tests or red teaming. Analyzing the defensive strategies of the “blue team” during various attack scenarios helps identify their response algorithms and steps. This process starts with differentiating between true and false positive alerts, identifying hacker attributes and evaluating compromised resources. Such insights enable the automation of defenses by validating logged events, ensuring a more effective and streamlined response to modern cyber threats. The first step in enhancing incident response is to automate the collection of contextual data that informs decision-making. This includes information about the particular machine or other asset involved in the security incident, user account details and intelligence on external threat elements like domain names. This foundational data is important for understanding the scope and impact of security incidents, enabling quicker and more effective responses. If an attack continues to evolve, the context gathered initially helps correlate future defensive measures with a pre-established hypothesis about the attack’s propagation.
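
The enrichment step described above can be sketched as follows. The asset, user, and intelligence lookups here are hypothetical stand-ins for real CMDB, directory, and threat-feed integrations.

```python
def enrich(alert, asset_db, user_db, intel_feed):
    """Attach asset, user, and threat-intelligence context to a raw alert so
    responders (or downstream automation) can judge scope and impact."""
    return {
        **alert,
        "asset": asset_db.get(alert["host"], {"criticality": "unknown"}),
        "user": user_db.get(alert["user"], {"privileged": False}),
        "known_bad_domain": alert["domain"] in intel_feed,
    }

# Invented lookup tables standing in for a CMDB, a directory, and a threat feed.
asset_db = {"db-01": {"criticality": "high"}}
user_db = {"alice": {"privileged": True}}
intel_feed = {"evil.example.com"}

alert = {"host": "db-01", "user": "alice", "domain": "evil.example.com"}
print(enrich(alert, asset_db, user_db, intel_feed))
```

With context attached automatically, the same record can drive both a human triage decision and later correlation if the attack propagates.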


Innovation in IT: A Blueprint for Digital Evolution

Success requires a methodical approach. Digital Business Methodology (DBM) provides insight into the "What" that shapes your approach, with the "How" contingent on tools, ecosystem, leadership support, and team skill set. DBM is a comprehensive strategy that empowers companies to embrace and implement digital business practices. It provides a well-defined path orchestrating data, technology, and personnel alignment. This approach yields results across the enterprise, emphasizing speed, consistency, and scalability through an outcome-driven, incremental process. This methodology's core is a business-led, agile digital culture focused on achieving bite-sized outcomes essential for accelerating business growth. Under the DBM umbrella, businesses lead in collaboration with key stakeholders throughout the entire process, from ideation to deployment. The primary focus lies in simplifying end-to-end workflows and establishing a single source of truth (SSOT). This guided and adaptable ideation-to-deployment ecosystem facilitates seamless collaboration among business owners, engineers, analysts, scientists, and operational teams, driving innovative solutions and achieving desired outcomes.


The Psychology of Cybersecurity Burnout

The cybersecurity landscape is incredibly complex, and the cybersecurity procedures implemented by a given organization are likely to vary significantly. However, a number of factors have emerged as being likely contributors to this mental health phenomenon. ... Anticipating developing threats is a further problem. Staff simply don’t have time to stay on top of the news and devise procedures that can deal with novel ransomware attacks or whatever else may be brewing in the attack space. “If I don’t get on top of this, it’s gonna be a problem for me and my team,” Gartland says. “So, we’re just trying to figure out: How do I learn something on the weekend or late at night?” Cybersecurity professionals must be highly attentive to their work and conspicuous failures can often be traced to a single error, increasing the burden of responsibility on even low-level employees. The vigilance required of the job is equivalent to that required of air traffic controllers and medical professionals. People who strongly identify with those responsibilities are more likely to suffer burnout due to intense internal motivation to fulfill them even when it is not realistic.



Quote for the day:

"Go as far as you can see; when you get there, you'll be able to see farther." -- J. P. Morgan

Daily Tech Digest - February 22, 2024

New Wave of 'Anatsa' Banking Trojans Targets Android Users in Europe

"Initially the [cleaner] app appeared harmless, with no malicious code and its AccessibilityService not engaging in any harmful activities," ThreatFabric said. "However, a week after its release, an update introduced malicious code. This update altered the AccessibilityService functionality, enabling it to execute malicious actions such as automatically clicking buttons once it received a configuration from the C2 server," the vendor noted. The files that the dropper dynamically retrieved from the C2 server included configuration for a malicious DEX file used to distribute Android application code; the DEX file itself, containing malicious code for payload installation; a configuration with the payload URL; and finally, code for downloading and installing Anatsa on the device. The multi-stage, dynamically loaded approach allowed each of the droppers used in the latest campaign to circumvent the tougher AccessibilityService restrictions Google implemented in Android 13, ThreatFabric said. For the latest campaign, the operator of Anatsa chose to use a total of five droppers disguised as free device-cleaner apps, PDF viewers, and PDF reader apps on Google Play.


CIO Gray Nester on fostering a culture of success

It’s easy to be courageous when you’ve already achieved more than you ever thought you would. I don’t have to be afraid to fail because I’m successful in the things that matter — my family. That’s where my love comes from. As a leader, courage and always doing what’s right equate to being honest but also being kind. There’s a difference between being honest and being truthful. As I have the opportunity to coach people, I have to deliver hard messages, and those are honest messages. I can be truthful with you and never address the opportunity to improve. So, I think courage is the willingness to say things that may not be popular but that help you achieve the goals and objectives you’re capable of achieving. We all show up here every day for something bigger than ourselves. If you believe in assuming positive intent and believe that people show up every day to be successful, then if you can give them the tough message, you have to believe they’re going to take that and do something with it because feedback is a gift. That doesn’t mean that everybody will be successful in that, but it’s our responsibility as leaders to go out and do that. That may mean saying, ‘Hey, Business, you’ve got a really bad idea, and this isn’t going to work, and let me tell you why.’ 


Navigating the Data Revolution: Exploring the Booming Trends in Data Science and Machine Learning

A significant trend in data science and machine learning revolves around incorporating artificial intelligence (AI) to drive automation. Industries across the spectrum are harnessing the potential of machine learning algorithms to streamline everyday tasks, fine-tune processes, and boost efficiency. Whether in manufacturing, healthcare, finance, or logistics, the wave of AI-powered automation is fundamentally transforming the operational landscape of businesses. ... Natural Language Processing (NLP) has taken center stage in the expansive realm of machine learning. Thanks to strides in deep learning models such as GPT-3, machines are rapidly evolving, displaying a remarkable proficiency in deciphering and generating language that mimics human expression. This transformative trend is reshaping how we engage with technology, from the intuitive responses of chatbots and virtual assistants to the seamless intricacies of language translation and content creation. ... The widespread adoption of Internet of Things (IoT) devices has triggered a notable upswing in data generation right at the edge of networks. A trend gaining significant traction is the fusion of edge computing with decentralized machine learning geared towards processing data near its source.


The Impact of Technical Ignorance

As most non-technical folks appear unable or unwilling to accept that software is hard, our responsibility – for better or worse – is to show and explain. Unique situations require adjusting the story told, but it is necessary – and never-ending – to have any chance to get the organization to understand: explaining how software is developed and deployed, demonstrating how a data-driven organization requires quality data to make correct decisions, explaining the advantages and disadvantages of leveraging open source solutions; showing examples of how open source licenses impact your organization’s intellectual property. Look for opportunities to inject background and substance when appropriate, as education is open-ended and never-ending. ... Aside from those employed in purely research and development roles, engineering/technology for engineering/technology's sake is not feasible, as technology concerns must be balanced with business concerns: product and its competitors, sales pipeline, customer support and feature requests, security, privacy, compliance, etc. 


Kubernetes Predictions Were Wrong

The view that Kubernetes would settle into quiet utility and effectively disappear while also running all our workloads failed to materialize. Nobody managed to create a single opinionated path for Kubernetes that would take care of all these choices. The simple reason for this is that the mythical one true way wouldn’t work for most applications and services. It’s impossible to create a single, simple path without acknowledging the context of the application and organization. This is why platform engineering has gained traction. While there’s little chance of creating an industrywide path of simplified choices, creating one within an organization is perfectly feasible. A minimal viable platform could be a wiki page listing pre-baked decisions and providing a standard example for each configuration file. This might evolve into a facade that allows developers to specify what they need along a simple dimension, such as “size,” with the platform taking care of the details behind the flag. Platforms should provide simplified ways to do the right thing while letting expert developers peel back the layers when the standard approach isn’t suitable.
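
A “size” facade of the kind described can be sketched in a few lines. The size tiers, field names, and values below are invented for illustration, not drawn from any real platform.

```python
# Pre-baked platform decisions keyed by a single "size" dimension.
SIZES = {
    "small":  {"replicas": 1, "cpu": "250m", "memory": "256Mi"},
    "medium": {"replicas": 3, "cpu": "500m", "memory": "1Gi"},
    "large":  {"replicas": 6, "cpu": "2",    "memory": "4Gi"},
}

def render_deployment(name, size="small", **overrides):
    """Expand a size flag into a full spec, letting expert users peel back
    the abstraction via explicit keyword overrides."""
    return {"name": name, **SIZES[size], **overrides}

print(render_deployment("billing", size="medium"))
print(render_deployment("reports", size="small", replicas=2))  # expert override
```

Most teams only ever touch the size flag; the override path is what keeps the facade from becoming a cage for expert developers.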


How DSPM Fits into Your Cloud Security Stack

DSPM solutions provide unique security capabilities, specifically tailored not only to addressing sensitive data in the cloud but also to supporting a holistic cloud security stack. As the variety and sophistication of attacks increase over time, new challenges arise that the existing security stack can hardly keep up with. A new, more aligned, and holistic inventory of security tools should be considered, consisting of identity threat protection, data-related risk reduction, privacy management, and a host of other imperative elements, while ensuring continuous monitoring of any cloud asset, including CSPs, SaaS apps, file shares, and DBaaS. However, building the most appropriate cloud security stack to do so may prove challenging in light of the numerous different – but similar-sounding – security domains in the market. DSPM tools protect data wherever it resides (IaaS, PaaS, SaaS, DBaaS, and file shares), combined with advanced identity-centric data threat protection. They empower security teams to reduce data risk and achieve unparalleled visibility into data location, misconfiguration, comprehensive and tailored classification, access permissions, usage patterns, and potential threats, ensuring continuous data security and governance.


Face off: Attackers are stealing biometrics to access victims’ bank accounts

Cybersecurity company Group-IB has discovered the first banking trojan that steals people’s faces. Unsuspecting users are tricked into giving up personal IDs and phone numbers and are prompted to perform face scans. These images are then swapped out with AI-generated deepfakes that can easily bypass security checkpoints. The method — developed by a China-based hacking group — is believed to have been used in Vietnam earlier this month, when attackers lured a victim into a malicious app, tricked them into face scanning, then withdrew the equivalent of $40,000 from their bank account. ... “These tools are relatively low cost, easily accessed and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions,” he said. ... “Organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” writes Gartner VP analyst Akif Khan.


Critical infrastructure attacks aren’t all the same: Why it matters to CISOs

Effectively restraining foreign adversaries would require limiting connectivity to critical infrastructure, which is only incrementally possible (via air-gapping, etc.). Better awareness of malign intentions, however, should dampen the sophistication of intrusion activity, and institutionalization of critical infrastructure preparedness and mitigation fundamentals should mitigate threat severity. From this perspective, Wray’s push to spread awareness of the PRC threat is wise, as is Canada’s attempt to pass stricter regulation of critical infrastructure operators’ security practices. One limits the discretionary conditions the Chinese need to build this capability; the other builds toward an inter-institutional apparatus that is more inherently adaptive, which should reduce the value of the capability. Stakeholders in the United States and elsewhere should double down on efforts that conform to these parameters. From more consistent declassification of details of critical infrastructure attacks to publicizing critical infrastructure operators’ security performance outcomes, public sector stakeholders can limit the conditions under which foreign activity can find strategic value.


Report: Manufacturing bears the brunt of industrial ransomware

One of the main reasons that the manufacturing sector is so heavily targeted is because it adopted digitization at a much quicker pace compared to, for example, the water and wastewater or transportation sectors. But Lee was quick to point out that other industrial sectors are catching up to the broad digital footprint – and potential access points – of the manufacturing sector. “The manufacturing industry really went through that quote unquote, digital transformation and connectivity very quickly. As a result of not investing in IoT security when they did that, we’re seeing a lot of ransomware cases, a lot of activists, criminals, etc., disrupting manufacturing,” Lee said. “Far more than gets reported publicly.” The manufacturing sector, Lee said, still struggles with segmenting networks like those that deal with human resources from operational technology networks that control operations, which can allow a hacker broad access to the organization. However, that trend is spreading to other sectors, such as water and wastewater, Lee warned. He expects an increase of ransomware attacks on water and other utilities as digitization becomes more common.


4 Steps to Achieving Operational Flow and Improving Quality in Tech Teams

Removing dependencies is often a lot of work. Dependencies are often the result of specialist knowledge that resides in another part of the organisation, or past architectural choices. It often feels like the dependencies are inevitable and inescapable. There’s a lot of truth to the idea that removing dependencies will be painful and time-consuming, but they only have to be removed once, at which point the team never has to deal with that dependency again. It’s an investment today in order to get better results tomorrow. ... Rather than arranging teams in functional silos, arrange them so they can deliver value independently. This arrangement then allows more work to move through the system simultaneously, because the different work doesn’t create delays for other teams. Each of the above contributes to improving flow. But what about improving quality? The interesting thing is that each of the steps above improves quality, too. By doing fewer things at once, the reduced cognitive load will make it easier for the team to produce higher quality work, while reduced context switching makes it less likely they’ll miss something important. 



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - February 21, 2024

The Top 5 Kubernetes Security Mistakes You’re Probably Making

Kubernetes configurations are primarily defined in YAML, a human-readable data serialization format. However, YAML’s simplicity is deceptive, as small errors can lead to significant security vulnerabilities. One common mistake is improper indentation or formatting, which can cause the configuration to be applied incorrectly or not at all. ... The ransomware attack on the Toronto Public Library revealed the critical importance of network microsegmentation in Kubernetes environments. By limiting network access to necessary resources only, microsegmentation is pivotal in preventing the spread of attacks and safeguarding sensitive data. ... eBPF is the basis for creating a universal “security blanket” across Kubernetes clusters, and is applicable on premises, in the public cloud and at the edge. Its integration at the kernel level allows for immediate detection of monitoring gaps and seamless application of security measures to new and changing clusters. eBPF can automatically apply predefined security policies and monitoring protocols to any new cluster within the environment.
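
A simple heuristic can catch some of these indentation slips before a manifest is applied. The check below is a deliberately naive sketch (real validation should go through a YAML parser or `kubectl apply --dry-run=client`), but it shows how a single stray space changes what a manifest means.

```python
def check_indentation(yaml_text, indent=2):
    """Flag lines whose leading-space count is not a multiple of the expected
    indent width -- a common source of silently misapplied config."""
    problems = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        stripped = line.lstrip(" ")
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and full-line comments
        leading = len(line) - len(stripped)
        if leading % indent != 0:
            problems.append((lineno, line))
    return problems

manifest = """\
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: app
     image: nginx  # 5 spaces: this key no longer belongs to the list item
"""
print(check_indentation(manifest))
```

Only the mis-indented `image:` line is flagged; a parser-based check would additionally catch errors that happen to land on a valid indent width.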


Error-correction breakthroughs bring quantum computing a step closer

The best strategy, says Sam Lucero, chief quantum analyst at Omdia, would be to combine multiple approaches to get the error rates down even further. ... The bigger question is which type of qubit is going to become the standard – if any. “Different types of qubits might be better for different types of computations,” he says. This is where early testing can come in. High-performance computing centers can already buy quantum computers, and anyone with a cloud account can access one online. Using quantum computers via a cloud connection is much cheaper and quicker. Plus, it gives enterprises more flexibility, says Lucero. “You can sign on and say, ‘I want to use IonQ’s trapped ions. And, for my next project, I want to use Rigetti, and for this other project, I want to use another computer.’” But stand-alone quantum computers aren’t necessarily the best path forward for the long term, he adds. “If you’ve got a high-performance computing capability, it will have GPUs for one type of computing, quantum processing units for another type of computing, CPUs for another type of computing – and it’s going to be transparent to the end user,” he says. “The system will automatically parcel it out to the appropriate type of processor.”


Is hybrid encryption the answer to post-quantum security?

One of the biggest debates is how much security hybridization offers. Much depends on the details, and algorithm designers can take any number of approaches with different benefits. There are several models for hybridization, and not all the details have been finalized. Encrypting the data first with one algorithm and then with a second combines the strength of both, essentially putting a digital safe inside a digital safe. Any attacker would need to break both algorithms. However, the combinations don’t always deliver in the same way. For example, hash functions are designed to make it hard to identify collisions, that is, two different inputs that produce the same output: x_1 ≠ x_2 such that h(x_1) = h(x_2). If the output of the first hash function is fed into a second, different hash function (say g(h(x))), it may not get any harder to find a collision, at least if the weakness lies in the first function. If two inputs to the first hash function produce the same output, then that same output will be fed into the second hash function, generating a collision for the hybrid system: g(h(x_1)) = g(h(x_2)) whenever h(x_1) = h(x_2). Digital signatures are also combined differently than encryption. One of the simplest approaches is to just calculate multiple signatures independently of each other.
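
The collision-propagation argument is easy to demonstrate concretely. The sketch below uses a deliberately weakened inner hash (SHA-256 truncated to one byte) as a stand-in for a broken h, finds a collision, and shows that composing with a strong outer g does not repair it.

```python
import hashlib

def h(data):
    # Deliberately weak inner "hash": truncating SHA-256 to one byte makes
    # collisions trivial to find, standing in for a broken first function.
    return hashlib.sha256(data).digest()[:1]

def g(data):
    # Strong outer hash in the composed construction g(h(x)).
    return hashlib.sha256(data).hexdigest()

# Brute-force two inputs that collide under the weak inner hash h.
seen = {}
x1 = x2 = None
for i in range(10_000):
    candidate = str(i).encode()
    digest = h(candidate)
    if digest in seen:
        x1, x2 = seen[digest], candidate
        break
    seen[digest] = candidate

assert x1 != x2 and h(x1) == h(x2)
# The collision propagates through the composition unchanged.
print(g(h(x1)) == g(h(x2)))  # True
```

Since only 256 one-byte digests exist, a collision is guaranteed within 257 candidates; and because g receives identical inputs, its strength is irrelevant to this particular failure mode.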


By elevating partners’ service capabilities, we ensure they offer a comprehensive cybersecurity solution to enterprises in today’s dynamic threat landscape

The MSSPs have a significant opportunity for growth, with an increasing number of partners showing interest in this domain. What’s notable is that our focus isn’t solely on partners delivering network security solutions but also extends to other offerings. For instance, our SIEM solutions now feature a consumption-based model, attracting more partners to explore the realm of MSSP partnerships. This trend has already gained momentum over the past year, indicating a promising trajectory for the future. As the market continues to expand, catering to a diverse range of customers across various sizes and sectors, the demand for managed security services will only intensify. Here, our integrator partners play a crucial role, positioned to capitalise on the growing requirements of clients. Moreover, selected MSSP partners have the opportunity to develop specialised services around Fortinet solutions, leveraging programs like FortiDirect, FortiEDR, FortiWeb, and FortiMail. Our offerings, such as the MSSP Monitor program and Flex VM program, provide flexible consumption models tailored to the evolving needs of MSP partners. 


Early adopters fast-tracking gen AI into production, according to new report

One in four organizations say gen AI is critically important to gaining increased productivity and efficiency. Thirty percent say improving customer experience and personalization is their highest priority, and 26% say it’s the technology’s potential to improve decision-making that matters most. ... “The generative AI phenomenon has captured the attention of the market—and the world—with both positive and negative connotations,” said Howard Dresner, founder, and chief research officer at Dresner Advisory. “While generative AI adoption remains nascent in the near term, a strong majority of respondents indicate intentions to adopt it early or in the future.” ... Nearly half of organizations consider data privacy to be a critical concern in their decision to adopt gen AI. Legal and regulatory compliance, the potential for unintended consequences, and ethics and bias concerns are also significant. Less than half of respondents—46% and 43%, respectively—consider costs and organizational policy important to generative AI adoption. Weaponized LLMs and attacks on chatbots fuel fears over data privacy. More organizations are fighting back and using gen AI to protect against chatbot leaks.


AI and data centers - Why AI is so resource hungry

Is it the data set, i.e. the volume of data? The number of parameters used? The transformer model? The encoding, decoding, and fine-tuning? The processing time? The answer is, of course, a combination of all of the above. It is often said that GenAI Large Language Models (LLMs) and Natural Language Processing (NLP) require large amounts of training data. However, measured in terms of traditional data storage, this is not actually the case. ... It is thought that GPT-3 was trained on 45 terabytes of Common Crawl plaintext, filtered down to 570GB of text data. Common Crawl is hosted on AWS for free as a contribution to open source AI data. But storage volumes, the billions of web pages or data tokens that are scraped from the web, Wikipedia, and elsewhere, then encoded, decoded, and fine-tuned to train ChatGPT and other models, should have no major impact on a data center. Similarly, the terabytes or petabytes of data needed to train a text-to-speech, text-to-image, or text-to-video model should put no extraordinary strain on the power and cooling systems of a data center built to host IT equipment storing and processing hundreds or thousands of petabytes of data.


Making cloud infrastructure programmable for developers

Just as service-oriented architecture (SOA) evolved application architecture from monolithic applications into microservices patterns, IaC has been the slow-burn movement that is challenging what the base building blocks should be for how we think of cloud infrastructure. IaC really got on the map in the 2010s, when Puppet, Chef, and Ansible introduced IaC methods for the configuration of virtual machines. Chef was well-loved for allowing developers to use programming languages like Ruby and for the reuse and sharing that came with being able to use the conventions of a familiar language. During the next decade, the IaC movement entered a new era as the public cloud provider platforms matured, and Kubernetes became the de facto cloud operating model. HashiCorp’s Terraform became the IaC poster child, introducing new abstractions for the configuration of cloud resources and bringing a domain-specific language (DSL) called HashiCorp Configuration Language (HCL) designed to spare developers from lower-level cloud infrastructure plumbing.


A cloud-ready infra: Fundamental shift in how new-age businesses deliver value to customers

Cloud computing has emerged as a robust and secure platform for data storage, offering unparalleled protection against extreme conditions and disasters. Today’s cloud-based providers offer robust security and disaster recovery capabilities, ensuring the safety and integrity of critical data assets. ... This includes empowering doctors and nurses to access patient records securely on their own devices and facilitating remote consultations through virtual desktop infrastructure (VDI). This instant access has transformed the way healthcare professionals interact with patient data, allowing doctors to review charts on tablets during rounds and nurses to retrieve medication histories from any workstation. By storing data on secure servers rather than end-client devices, cloud-based solutions guarantee the protection of critical medical records in the event of theft or compromise of an end device. ... This approach not only ensures data security but also meets the stringent requirements of healthcare institutions while allowing for scalable systems connected to the hospital’s network.


The Paradox of Productivity: How AI and Analytics are Shaping the Future of Work-Life Balance

One of the key challenges we face is managing time effectively in an environment where the line between ‘on’ and ‘off’ hours is increasingly fuzzy. AI-powered tools and analytics can generate insights and tasks round-the-clock, leading to an ‘always-on’ work culture. This can encroach upon personal time, making it challenging to disconnect and potentially causing stress and burnout. Maintaining mental health in this context is paramount. It is incumbent upon companies to ensure that the implementation of AI and analytics tools does not exacerbate workplace stress. Instead, these tools should be leveraged to promote a healthier work-life balance by automating routine tasks, predicting workload peaks, and enabling flexible working arrangements. Achieving personal fulfillment in the age of AI also means embracing lifelong learning. As the nature of work evolves, so too must our skillsets. Upskilling and reskilling become not just a means to professional advancement but also an opportunity for personal growth and satisfaction. Analytics can play a role here in identifying skill gaps and learning opportunities that align with individual career paths and interests.


The importance of a good API security strategy

Hackers love exploiting APIs for many reasons, but mostly because APIs let them bypass security controls and easily access sensitive company and customer data, as well as certain functionality. A recent incident involving a publicly exposed API of social media platform Spoutible could have ended in attackers stealing users’ 2FA secrets, encrypted password reset tokens, and more. This type of incident can result in a loss of customer and business partners’ trust, consequently leading to financial loss and a drop in brand value. Poor API security practices can also have regulatory and legal consequences, cause disruption to company operations, and even result in intellectual property theft. ... A good API security strategy is essential for every organization that wants to keep its digital assets safe and protect sensitive customer data. OWASP constantly updates its list of the top 10 API security threats. While security practitioners mustn’t rely solely on this data, the list is still an essential tool when planning a security strategy that will hold up. Adhering to the NIST Cybersecurity Framework is also an essential step in planning a good API security strategy.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - February 20, 2024

How generative AI will benefit physical industries

To make generative AI’s potential a reality for a physical business, two crucial elements come into play: people and data. Investing in a highly skilled team is a precondition for success in any business. Also critical is having a diversity of expertise, as well as a diversity of experiences, cultural touch points, and backgrounds. Drawing on this expertise and experience to inform how generative AI is developed allows more context to be built in, and the models can be expanded to serve a global audience versus a regional or national one. Data quality in both edge computing and generative AI models is crucial. This is what has driven Motive to invest in a truly world-class annotations team. Because accuracy is so critical for the safety and optimization of our customers, this team ensures that the processes behind our use of generative AI are strong and consistent. These processes include ensuring the highest quality data and labels to train our models, and thus our products and services. At the same time, generative AI in the physical economy will only be as useful as the insights and capabilities it creates.


Do you need a larger project team?

There is plenty of anecdotal evidence in the industry of GCs taking on data center projects in EU regions without fully understanding the local resourcing requirements and supply chain logistics. In addition, they have incorrectly assumed that a UK labor force will be as effective as normal when on rotation-based attendance in a regional project office. Instead, the solution may lie in developing smaller, fully supported, highly competent, highly motivated, and well-compensated teams capable of delivering increased outputs to realize your competitive potential – a theme also adopted by World Quality Week in 2023. To meet the strong imperative for quick time-to-market in the industry within the context of an acute skills shortage, we argue that the solution lies in focusing on training people and empowering them with the capabilities of AI. Streamlined, lean teams with mature AI tools have a better chance of efficiently delivering on larger projects. Investment in training is crucial across the industry, particularly innovative approaches that enable smaller teams to achieve more thanks to AI assistance and other technological advancements.


Data Observability in the Cloud: Three Things to Look for in Holistic Data Platform Governance

To be truly meaningful in addressing the pain associated with data and AI pipelines, data observability tools must expand into FinOps. It’s no longer enough to know where a pipeline stalls or breaks -- data teams need to know how much the pipelines cost. In the cloud, inefficient performance drives up computing costs, which in turn drives up total costs. Tools must encompass FinOps to provide observability into costs pertaining to both infrastructure and computing resources, broken down by job, user, and project. They must also include advanced analytics to provide guidance on how to make individual pipelines cost-efficient. This will free up data teams to focus on strategic decision-making rather than spending their time reconfiguring pipelines for cost. ... To meet these demands, data observability solution vendors must offer custom products that allow customers to see on a platform-specific level such things as detailed cost visibility, efficient management of storage costs, chargeback/showback, and where the expensive projects, queries, and users lie.


Fundamentals of Functions and Relations for Software Quality Engineering

Effective testing is not just about covering every line of code. It's about understanding the underlying relationships. How do we effectively test the complex relationships in our software code? Understanding functions and relations proves an invaluable asset in this endeavor. ... It's worth noting that while all programs can be viewed as functions in a broad sense, not all are "pure" functions. Pure functions have no side effects, meaning they solely rely on their inputs to produce outputs without altering any external state. In contrast, many practical programs involve side effects, complicating their pure function interpretation. ... While functions provide clear input-output connections, not all relationships in software are so straightforward. Imagine tracking dependencies between tasks in a project management tool. Here, multiple tasks might relate to each other, forming a more complex network. ... Relations can sometimes group elements into equivalence classes, where elements within a class behave similarly. Testers can leverage this by testing one element in each class, assuming similar behavior for others, saving time and resources.
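To make the two ideas above concrete (pure functions, and equivalence-class testing), here is a minimal sketch in Python; the `shipping_band` function and its pricing rules are invented for illustration:

```python
# A pure function: the output depends only on the input, and no external
# state is read or modified.
def shipping_band(weight_kg: float) -> str:
    """Classify a parcel into a pricing band (hypothetical rules)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:
        return "small"
    if weight_kg <= 20:
        return "medium"
    return "large"

# Equivalence-class testing: one representative per class stands in for all
# inputs expected to behave the same way; boundaries get their own cases.
cases = {0.5: "small", 2.0: "small", 2.1: "medium", 20.0: "medium", 25.0: "large"}
for weight, expected in cases.items():
    assert shipping_band(weight) == expected
print("all equivalence classes covered")
```

Because the function is pure, each test case needs no setup or teardown, and the class representatives can be chosen purely from the input-output relation.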


Your AI Girlfriend Is Cheating On You, Warns Mozilla

Mozilla said it could find only one chatbot that met its minimum security standards, with a worrying lack of transparency over how the intensely personal information that might be shared in such apps is protected. Almost two-thirds of the apps didn’t reveal whether the data they collect is encrypted. Just under half of them permitted the use of weak passwords, with some even accepting a password as flimsy as “1”. More than half of the apps tested also failed to let users delete their personal data. One even claimed that “communication via the chatbot belongs to the software.” Mozilla also found the use of trackers—tiny pieces of code that gather information about your device and what you do on it—was widespread among the romantic chatbots. ... The main tip is not to say anything to the chatbot that you wouldn’t want friends or colleagues to discover, as the privacy of these services cannot be guaranteed. Also use a strong password, request that personal data is deleted once you’ve finished using the chatbot, opt out of having your data used to train AI models, and don’t accept phone permissions that give the chatbot access to your location, camera, microphone, or files on your device.


A Balanced Look at the Potential and Challenges of Popular LLMs

A beautiful symphony requires more than just individual talent. Ethical considerations like potential biases and misinformation risks demand attention. We must ensure responsible development, ensuring these LLMs don’t become instruments of discord but rather powerful tools for good. The potential for collaboration is even more exciting. Imagine Bard fact-checking Claude’s poems, or Qwen providing real-time data for GPT-3.5-Turbo-0613’s code generation. Such collaborations could lead to groundbreaking innovations, a true ensemble performance exceeding the capabilities of any single LLM. This is just the opening act of a much grander performance. As the music evolves, LLMs hold immense potential. Advancements in natural language understanding could enable nuanced conversations, personalized education could become a reality, and creative collaboration could reach unprecedented heights. This orchestra is just beginning its performance, and the future holds a symphony of possibilities waiting to be composed. In short, the key lies in understanding their technical nuances, recognizing their individual strengths, and fostering responsible development.


PrintListener: stealing fingerprint data from finger-swiping sounds

Without contact prints or finger detail photos, how can an attacker hope to get any fingerprint data to enhance MasterPrint and DeepMasterPrint dictionary attack results on user fingerprints? One answer is as follows: the PrintListener paper says that “finger-swiping friction sounds can be captured by attackers online with a high possibility.” The source of the finger-swiping sounds can be popular apps like Discord, Skype, WeChat, FaceTime, etc. Any chatty app where users carelessly perform swiping actions on the screen while the device mic is live. Hence the side-channel attack name – PrintListener. ... To prove the theory, the scientists practically developed their attack research as PrintListener. In brief, PrintListener uses a series of algorithms for pre-processing the raw audio signals which are then used to generate targeted synthetics for PatternMasterPrint. Importantly, PrintListener went through extensive experiments “in real-world scenarios,” and, as mentioned in the intro, can facilitate successful partial fingerprint attacks in better than one in four cases, and complete fingerprint attacks in nearly one in ten cases.


ClickHouse: Scaling Log Management with Managed Services

A viable solution emerges in the merging of the advantages of open-source tools with the efficiency of managed services. This combination effectively addresses scalability and cost concerns, while upholding the operational efficiency required. Striking this balance between functionality, cost, and effort is particularly critical for teams constrained by budget and limited engineering resources. To illustrate this approach, consider specific log management strategies, such as the one implemented by DoubleCloud, which embody these principles. DoubleCloud, for instance, employs services like ClickHouse for data transfer and visualization, effectively managing substantial log volumes within a modest budget. ClickHouse is renowned for its efficient data compression techniques, serving as a prime example of how open source tools, when properly managed, can significantly enhance log management processes. This scenario provides a practical demonstration of how the integration of open source benefits with managed services can offer optimal solutions to the challenges previously discussed.


4 hidden risks of your enterprise cloud strategy

Cloud vendors themselves can encounter any number of business-related issues that can challenge their ability to provide service at the standard enterprise CIOs committed to when the contract was signed, including the introduction of new risks. ... Many enterprise IT executives see the cloud as delivering near-infinite scalability — something that is not mathematically true. This is not helped by cloud marketing, which strongly implies — if not outright promises — unlimited scalability. Most of the time, the cloud’s elasticity affords great levels of scalability for its tenants. When emergency strikes, however, all bets are off, says Charles Blauner, operating partner and CISO in residence at cybersecurity investment firm Team8, and former CISO for Citigroup, Deutsche Bank, and JP Morgan Chase. ... “CIOs believe that by using multiple cloud providers, they think that it is improving availability, but it’s not. All it’s doing is increasing complexity, and complexity has always been the enemy of security,” Winckless says. “It is far more cost-effective to use the cloud provider’s zones.” Enterprises also often fall short on the financial and efficiency benefits promised by the cloud because they are unwilling to trust the cloud environment’s mechanisms sufficiently — or so argues Rich Isenberg, a partner at consulting firm McKinsey who oversees their cybersecurity strategy practice.


Data Governance in the Era of Generative AI

GenAI accelerates trends already evident with traditional AI: the importance of data quality and privacy, growing focus on responsible and ethical AI, and the emergence of AI regulations. This will create both new challenges and opportunities for DG. ... Traditional DG processes provide a well-trodden path for proper management and usage of data across organizations: discover and classify data to identify critical/sensitive data; map the data to policies and other business context; manage data access and security; manage privacy and compliance; and monitor and report on effectiveness. Similarly, as DG frameworks expand to support AI governance, they have an important role to play across the GenAI/LLM value chain. ... Traditional AI/ML will continue to be critical for automating and scaling various DG processes. These include data classification; associating policy and business context with data; and detecting anomalies/issues and creating and applying data quality rules to fix them. Building on these capabilities, GenAI has the potential to turbocharge data democratization and drive dramatic gains in productivity for data teams.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - February 19, 2024

Why artificial general intelligence lies beyond deep learning

Decision-making under deep uncertainty (DMDU) methods such as Robust Decision-Making may provide a conceptual framework to realize AGI reasoning over choices. DMDU methods analyze the vulnerability of potential alternative decisions across various future scenarios without requiring constant retraining on new data. They evaluate decisions by pinpointing critical factors common among those actions that fail to meet predetermined outcome criteria. The goal is to identify decisions that demonstrate robustness — the ability to perform well across diverse futures. While many deep learning approaches prioritize optimized solutions that may fail when faced with unforeseen challenges (as optimized just-in-time supply systems did in the face of COVID-19), DMDU methods prize robust alternatives that may trade optimality for the ability to achieve acceptable outcomes across many environments. DMDU methods offer a valuable conceptual framework for developing AI that can navigate real-world uncertainties. Developing a fully autonomous vehicle (AV) could demonstrate the application of the proposed methodology. The challenge lies in navigating diverse and unpredictable real-world conditions, thus emulating human decision-making skills while driving.
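The robustness criterion described above can be sketched in a few lines of Python. The scenarios, payoff numbers, and maximin tie-break rule are invented for illustration; the point is only that a robust choice is selected by its worst case across futures, not its best average:

```python
# Toy robustness check in the spirit of Robust Decision-Making:
# score each candidate decision across several futures, keep only those
# whose worst case clears a predetermined outcome threshold, and among
# those pick the best worst case (a maximin rule).

decisions = {
    "just_in_time": {"normal": 10, "demand_spike": 8, "supply_shock": -6},
    "buffered":     {"normal": 7,  "demand_spike": 6, "supply_shock": 4},
}
threshold = 0  # minimum acceptable outcome in any scenario

def robust_choice(decisions, threshold):
    viable = {name: min(payoffs.values())
              for name, payoffs in decisions.items()
              if min(payoffs.values()) >= threshold}
    return max(viable, key=viable.get) if viable else None

print(robust_choice(decisions, threshold))  # prints "buffered"
```

Note that "just_in_time" has the higher average payoff but fails the supply-shock future, mirroring the article's COVID-19 example; the robust alternative trades peak performance for acceptability everywhere.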


Bouncing back from a cyber attack

In the case of a cyber attack, the inconceivable has already happened – all you can do now is bounce back. The big picture issue is that too often IoT (internet of things) networks are filled with bad code, poor data practices, lack of governance, and underinvestment in secure digital infrastructure. Due to the popularity and growth of IoT, manufacturers of IoT devices spring up overnight promoting products that are often constructed from lower-quality components and firmware, which can carry well-known vulnerabilities exposed by poor design and production practices. These vulnerabilities are then introduced into a customer environment, increasing risk and possibly remaining unidentified. So, there’s a lot of work to do, including creating visibility over deep, widely connected networks with a plethora of devices talking to each other. All too often, IT and OT networks run on the same flat network. Many of these organisations are planning segmentation projects, but such projects are complex and disruptive to implement, so in the meantime companies want to understand what's going on in these environments and minimise disruption in the event of an attack.


Diversity, Equity, and Inclusion for Continuity and Resilience

As continuity professionals, the average age tends to skew older, so how do we continue to bring new people into the fold and ensure they feel they can learn and be respected in the industry? Students need to be made aware that this is an industry they can step into. Unfortunately, many already have experience with active shooter drills as the norm. They may never have organized one, but they have participated in many of these drills in school. Why not take advantage of that experience for the students who are interested in this field? Taking their advice could make exercises such as active shooter or severe weather drills less traumatic. Students who have participated in these drills for at least 13 years have a great deal of insight, as do Millennials who grew up at the forefront of school shootings but never actively practiced what to do if one happened while they were in school. These future colleagues’ insights could change how we run specific exercises and events, to everyone’s benefit. Still, there must be openness to new and fresh ideas, treating them as valid instead of dismissing them because of their authors’ age and experience. Similarly, people with disabilities have always been vocal about their needs.


AI’s pivotal role in shaping the future of finance in 2024 and beyond

As AI becomes more embedded in the financial fabric, regulators are crafting a nuanced framework to ensure ethical AI use. The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have initiated guidelines for responsible AI adoption, emphasising transparency, accountability, and fairness in algorithmic decision-making processes. While the benefits are palpable, challenges persist. The rapid pace of AI integration demands a strategic approach to ensure a safe financial ecosystem ... The evolving nature of jobs due to AI necessitates a concerted effort towards upskilling the workforce. A McKinsey Global Institute report indicates that approximately 46% of India’s workforce may undergo significant changes in their job profiles due to automation and AI. To address this, collaborative initiatives between the government, educational institutions, and the private sector are imperative to equip the workforce with the requisite skills for the future. ... The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have recognised the need for ethical AI use in the financial sector. Establishing clear guidelines and frameworks for responsible AI governance is crucial.


How to proactively prevent password-spray attacks on legacy email accounts

Often with an ISP it’s hard to determine the exact location from which a user is logging in. If they access from a cellphone, often that geographic IP address is in a major city many miles away from your location. In that case, you may wish to set up additional infrastructure to relay their access through a tunnel that is better protected and able to be examined. Don’t assume the bad guys will use a malicious IP address to announce they have arrived at your door. According to Microsoft, “Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications.” The attackers then created a new user account to grant consent in the Microsoft corporate environment to the actor-controlled malicious OAuth applications. “The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes.” This is where my concern pivots from Microsoft’s inability to proactively protect its processes to the larger issue of our collective vulnerability in cloud implementations. 


How To Implement The Pipeline Design Pattern in C#

The pipeline design pattern in C# is a valuable tool for software engineers looking to optimize data processing. By breaking down a complex process into multiple stages, and then executing those stages in parallel, engineers can dramatically reduce the processing time required. This design pattern also simplifies complex operations and enables engineers to build scalable data processing pipelines. ... The pipeline design pattern is commonly used in software engineering for efficient data processing. This design pattern utilizes a series of stages to process data, with each stage passing its output to the next stage as input. The pipeline structure is made up of three components: the source, where the data enters the pipeline; the stages, each responsible for processing the data in a particular way; and the sink, where the final output goes. Implementing the pipeline design pattern offers several benefits, one of the most significant being the efficient processing of large amounts of data. By breaking down the data processing into smaller stages, the pipeline can handle larger datasets. The pattern also allows for easy scalability, making it easy to add additional stages as needed.
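The article's examples are in C#, but the source → stages → sink structure it describes is language-agnostic. A minimal lazy-evaluation sketch in Python (the stage functions are invented; `map` keeps each stage streaming so items flow through without materializing intermediate lists):

```python
# Minimal pipeline: the source feeds the first stage, each stage feeds the
# next, and the sink consumes the final output.
def pipeline(source, stages):
    data = iter(source)
    for stage in stages:
        data = map(stage, data)  # lazy: each item streams through all stages
    return data

source = range(1, 6)                       # where data enters the pipeline
stages = [lambda x: x * 2, lambda x: x + 1]  # per-stage transformations
sink = list(pipeline(source, stages))      # where the final output goes
print(sink)  # prints [3, 5, 7, 9, 11]
```

A production version (as the article suggests for C#) would run stages concurrently with queues between them; this sketch only shows the staged dataflow itself.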


Accuracy Improves When Large Language Models Collaborate

Not surprisingly, this idea of group-based collaboration also makes sense with large language models (LLMs), as recent research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is now showing. In particular, the study focused on getting a group of these powerful AI systems to work with each other using a kind of “discuss and debate” approach, in order to arrive at the best and most factually accurate answer. Powerful large language model AI systems, like OpenAI’s GPT-4 and Meta’s open source LLaMA 2, have been attracting a lot of attention lately with their ability to generate convincing human-like textual responses about history, politics and mathematical problems, as well as producing passable code, marketing copy and poetry. However, the tendency of these AI tools to “hallucinate”, or come up with plausible but false answers, is well-documented; thus making LLMs potentially unreliable as a source of verified information. To tackle this problem, the MIT team claims that the tendency of LLMs to generate inaccurate information will be significantly reduced with their collaborative approach, especially when combined with other methods like better prompt design, verification and scratchpads for breaking down a larger computational task into smaller, intermediate steps.
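The "discuss and debate" loop described above can be sketched in a few lines; the stub agents below stand in for real LLM calls, and their names and behavior are invented for illustration. Each round, every agent sees its peers' current answers and may revise its own, and a final majority vote picks the group's answer:

```python
# Hedged sketch of a multi-agent debate loop with stub agents.
from collections import Counter

def debate(agents, question, rounds=2):
    # initial independent answers
    answers = [agent(question, context=[]) for agent in agents]
    for _ in range(rounds):
        # each agent revises after seeing everyone else's current answer
        answers = [
            agent(question, context=[a for j, a in enumerate(answers) if j != i])
            for i, agent in enumerate(agents)
        ]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

# Stub agents: one always answers "42"; one starts wrong but defers to peers.
def confident(question, context):
    return "42"

def waverer(question, context):
    return Counter(context).most_common(1)[0][0] if context else "41"

print(debate([confident, confident, waverer], "6 x 7?"))  # prints "42"
```

Real implementations replace the stubs with model API calls and feed peers' answers into each prompt; the control flow, though, is just this revise-and-vote loop.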


There's AI, and Then There's AGI: What You Need to Know to Tell the Difference

For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, the number of steps required to decide to wipe out humanity and then make progress toward that goal is too many to be realistically possible. "There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said. He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before." LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation. But other problems require thinking about the future. "How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require … planning and look ahead and hypotheticals that don't exist yet … there's this big black hole of capabilities that humans can do that AI is just really, really bad at."


Metaverse and the future of product interaction

As the metaverse continues to evolve, so must the approach to product design. This includes considering how familiar objects can be repurposed as functional interface elements in a virtual environment. Additionally, understanding the dynamics of group interactions in virtual spaces is crucial. Designers must anticipate these trends and adapt their designs accordingly, ensuring that products remain relevant and engaging in the ever-changing landscape of the metaverse. In India, the metaverse presents significant opportunities for businesses to redefine consumer experiences. It opens up possibilities for more interactive, personalised, and adventurous engagements with customers. This not only increases customer engagement and loyalty but also creates new avenues for value exchange and revenue streams. The metaverse, with its potential to impact diverse sectors like communications, retail, manufacturing, education, and banking, is poised to be a game-changer in the Indian market. ... As the metaverse continues to expand its reach and influence, businesses and designers in India and around the world must evolve to meet the demands of this new digital era.


Build trust to win out with genAI

Businesses need to adopt ‘responsible technology’ practices, which will give them a powerful lever that enables them to deploy innovative genAI solutions while building trust with consumers. Responsible tech is a philosophy that aligns an organization’s use of technology to both individuals’ and society’s interests. It includes developing tools, methodologies, and frameworks that observe these principles at every stage of the product development cycle. This ensures that ethical concerns are baked in at the outset. This approach is gaining momentum as people realize how technologies such as genAI can impact their daily lives. Even organizations such as the United Nations are codifying their approach to responsible tech. Consumers urgently want organizations to be responsible and transparent with their use of genAI. This can be a challenge because, when it comes to transparency, there are a multitude of factors to consider, including everything from acknowledging AI is being used to disclosing what data sources are used, what steps were taken to reduce bias, how accurate the system is, or even the carbon footprint associated with the genAI system.



Quote for the day:

"Entrepreneurs average 3.8 failures before final success. What sets the successful ones apart is their amazing persistence." -- Lisa M. Amos