Daily Tech Digest - December 09, 2023

AI in Biotechnology: The Big Interview with Dr Fred Jordan, Co-Founder of FinalSpark

Of course, the ethical consideration is increased because we are using human cells. From an ethical perspective, what is interesting is that all this wouldn’t be possible without iPSCs (induced pluripotent stem cells). Ethically, we don’t need to take the brain of a real human being to conduct experiments. ... The ultimate goal is to develop machines with a form of intelligence. We want to create a real function, something useful. Imagine inputting a picture to the organoid, and it responds, recognizing objects like cats or dogs. Right now, we are focusing on one specific function – the significant reduction in energy consumption, potentially millions to billions of times less than digital computers. As a result, one practical application could be cloud computing, where these neuron-based systems consume significantly less energy. This offers an eco-friendly alternative to traditional computing processing. Ultimately, the future of AI in biotechnology holds huge potential for various applications because it’s a completely new way of looking at neurons. It’s like the inventors of the transistor not knowing about the internet.


AI regulatory landscape and the need for board governance

“We all need to have a plan in place, and we need to be thinking about how are you using it and whether it is safe.” She underscored the urgency, noting that journalists are investigating where AI has gone wrong and where it’s discriminating against people. Additionally, there are lawyers seizing on potential litigation opportunities against ill-prepared, deep-pocketed organizations. “Good AI hygiene is non-negotiable today, and you must have good oversight and best practices in place,” she asserted. Despite a lack of comprehensive Congressional AI legislation, Vogel clarified that AI is not without oversight. Four federal agencies recently committed to ensuring fairness in emerging AI systems. In a recent statement, agency leaders committed to using their enforcement powers if AI perpetuates unlawful bias or discrimination. AI regulatory bills have been proposed by over 30 state legislatures, and the international community is also ramping up efforts. Vogel cited the European Union's AI Act as the AI equivalent of the GDPR, the regulation that established strict data privacy rules affecting companies worldwide.


Data Management, Distribution, and Processing for the Next Generation of Networks

Investments in cloud architectures by CSPs span their own resources – but they also extend to third parties; federated cloud architectures are the result. These interconnected cloud assets allow CSPs to extend their reach, share resources and collaborate with other stakeholders to secure desired outcomes. Why do we combine this with edge computing? Because resources at the edge may not be in the CSP’s own domain. Edge systems may be a combination of CSP-owned and other resources that are used in parallel to deliver a particular service. And, regardless of overall pace towards 5G SA, edge computing is now firmly in demand by enterprises (and CSPs), to support a new generation of high-performance and low latency services. This demand won’t only be served by CSPs, however. Many enterprises are seeking to deploy private networks – and the resources required to support their applications may be accessed via federated clouds. Such an enterprise may not need its own UPF (User Plane Function), but it may benefit from one offered by another provider in an adjacent edge location, or delivered by a systems integrator that runs multiple private networks with shared resources, available on demand.


Understanding Each Link of the Cyberattack Impact Chain

There are two ways to assess the cyberattack impact chain: causes and effects. To build stakeholder support for cybersecurity awareness training (CSAT), CISOs have to show the board how much damage cyberattacks are capable of causing. Beyond the fact that the average cost of a data breach reached an all-time high of $4.45 million in 2023, there are many other repercussions: disrupted services and operations, a loss of customer trust and a heightened risk of future attacks. CSAT content must inform employees about the effects of cyberattacks to help them understand the risks companies face. It’s even more important for company leaders and employees to have a firm grasp on the causes of cyberattacks. Cybercriminals are experts at exploiting employees’ psychological vulnerabilities – particularly fear, obedience, craving, opportunity, sociableness, urgency and curiosity – to steal money and credentials, break into secure systems and launch cyberattacks. Consider the MGM attack, which relied on vishing (voice phishing) – one of the most effective social engineering tactics, as it allows cybercriminals to impersonate trusted entities to deceive their victims.


Another Cyberattack on Critical Infrastructure and the Outlook on Cyberwarfare

Critical infrastructure attacks, like the one against the water authority in Pennsylvania, have occurred in the wake of the Israel-Hamas war. And geopolitical tension and turmoil extend beyond this conflict. Russia’s invasion of Ukraine has sparked cyberattacks. Chinese cyberattacks against government and industry in Taiwan have increased. “This is just going to be an ongoing part of operating digital systems and operating with the internet,” Dominique Shelton Leipzig, a partner and member of the cybersecurity and data privacy practice at global law firm Mayer Brown, tells InformationWeek. While kinetic weapons are still very much a part of war, cyberattacks are another tool in the arsenal. Successful cyberattacks against critical infrastructure have the potential for widespread devastation. “The landscape of warfare is changing,” says Warner. And the weaponization of artificial intelligence is likely to increase the scale of cyberwarfare. “We have the normal technology that we use for denial-of-service attacks, but imagine being able to do all of that on an even greater scale,” says Shelton Leipzig.


Continuous Testing in the Era of Microservices and Serverless Architectures

Continuous testing is a practice that emphasizes the need for testing at every stage of the software development lifecycle. From unit tests to integration tests and beyond, this approach aims to detect and rectify defects as early as possible, ensuring a high level of software quality. It extends beyond mere bug detection, encapsulating a holistic approach: while unit tests scrutinize individual components, integration tests evaluate the collaboration between diverse modules. The practice not only minimizes defects but also strengthens the robustness of the entire system. ... Decomposed testing strategies are key to effective microservices testing. This approach advocates for the examination of each microservice in isolation, as sketched below. It involves a rigorous process of testing individual services to ensure their functionality meets specifications, followed by comprehensive integration testing. This methodical approach not only identifies defects at an early stage but also guarantees seamless communication between services, aligning with the modular nature of microservices.
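
As a rough sketch of the "test each service in isolation" step, here is a minimal Python example; the pricing service and its catalog dependency are invented for illustration, and the catalog microservice is replaced by a stub so that only the logic of the service under test is exercised:

    from unittest.mock import Mock

    class PricingService:
        def __init__(self, catalog_client):
            self.catalog = catalog_client  # injected, so it can be stubbed in tests

        def quote(self, sku: str, qty: int) -> float:
            unit = self.catalog.get_price(sku)
            # Hypothetical business rule: 10% bulk discount from 10 units.
            return round(unit * qty * (0.9 if qty >= 10 else 1.0), 2)

    def test_bulk_discount_in_isolation():
        # The catalog service is a stub, so this test exercises only
        # the pricing service's own logic, with no network calls.
        catalog = Mock()
        catalog.get_price.return_value = 2.50
        svc = PricingService(catalog)
        assert svc.quote("sku-1", 10) == 22.50
        catalog.get_price.assert_called_with("sku-1")

A separate integration pass would then run the real services together to verify the contract between them.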


Understanding Master Data Management’s integration challenges

The integration of data within MDM is a very complex task, which should not be underestimated. Many organizations often have a myriad of source systems, each with its own data structure and format. These systems can range from commercial CRM or ERP systems to custom-built legacy software, all of which may use different data models, definitions, and standards. In addition, organizations often desire real-time or near-real-time synchronization between the MDM system and the source systems. Any changes in the source systems need to be immediately reflected in the MDM system to ensure data accuracy and consistency. Using a native connector from the MDM system to read data from your operational systems can provide several benefits, such as ease of integration. However, the choice of using a native connector or a custom-built one mostly depends on your specific needs, the complexity of your data, the systems you’re integrating, and the capabilities of your MDM system.


Aim for a modern data security approach

Beginning with data observability, a “shift left” implementation requires that data security become the linchpin before any application is put into production. Instead of being confined to data quality or data reliability, security needs to become another use case application of the underlying data and be unified into the rest of the data observability subsystem. By doing this, data security benefits from the alerts and notifications stemming from data observability offerings. Data governance platform capabilities typically include business glossaries, catalogs, and data lineage. They also leverage metadata to accelerate and govern analytics. In “shift left” data governance, the same metadata is augmented by data security policies and user access rights to further increase trust and allow appropriate users to access data. Leveraging and establishing comprehensive data observability and governance is the key to data democratization. As a result, these proactive and transparent views over the security of critical data elements will also accelerate application development and improve productivity.


Google expands minimum security guidelines for third-party vendors

"The expanded guidance around external vulnerability protection aims to provide more consistent legal protection and process to bug hunters that want to protect themselves from being prosecuted or sued for reporting findings," says Forester Principal Analyst Sandy Carielli. "It also helps set expectations about how companies will work with researchers. Overall, the expanded guidance will help build trust between companies and security researchers." The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. "That contributes to a more secure digital ecosystem, which is especially crucial in critical infrastructure sectors where vulnerabilities can have significant repercussions," he says. ... The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. 


Europe Reaches Deal on AI Act, Marking a Regulatory First

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," said Thierry Breton, the European commissioner for internal market, who had a key role in negotiations. The penalties for noncompliance with the rules can lead to fines of up to 7% of global revenue, depending on the violation and size of the company. What the final regulation ultimately requires of AI companies will be felt globally, a phenomenon known as the Brussels effect since the European Union often succeeds in approving cutting-edge regulations before other jurisdictions. The United States is nowhere near approving a comprehensive AI regulation, leaving the Biden administration to rely on executive orders, voluntary commitments and existing authorities to combat issues such as bias, deep fakes, privacy and security. European officials had no difficulty in agreeing that the regulation should ban certain AI applications such as social scoring or that regulations should take a tiered-based approach that treats high-risk systems, such as those that could influence the outcome of an election, with greater requirements for transparency and disclosure.



Quote for the day:

"It is never too late to be what you might have been." -- George Eliot

Daily Tech Digest - December 07, 2023

Top 5 Trends in Cloud Native Software Testing in 2023

As digital threats become more sophisticated, there’s a heightened focus on security testing, particularly among large enterprises. This trend is about integrating security protocols right from the initial stages of development. Tools for static application security testing (SAST) and dynamic application security testing (DAST) are becoming essentials in testing workflows. ... The TestOps trend integrates testing into the continuous development cycle, echoing the collaborative and automated ethos of DevOps. TestOps focuses on enhancing communication between developers, testers, and operations, ensuring continuous testing and quicker feedback loops. It leverages real-time analytics to refine testing strategies, ultimately boosting software quality and efficiency. Extending the principles of DevOps, GitOps uses Git repositories as the backbone for managing infrastructure and application configurations, including testing frameworks. ... The rise of ephemeral test environments is a game-changer. These environments are created on demand and are short-lived, providing a cost-effective way to test applications in a controlled environment that closely mirrors production.
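
To make the ephemeral-environment idea concrete, here is a minimal sketch in Python. It assumes the testcontainers and SQLAlchemy packages, a Postgres driver, and a local Docker daemon; the table and assertions are invented for illustration:

    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    def test_orders_roundtrip():
        # A throwaway Postgres instance exists only for this test run
        # and is torn down automatically when the block exits.
        with PostgresContainer("postgres:15") as pg:
            engine = sqlalchemy.create_engine(pg.get_connection_url())
            with engine.begin() as conn:
                conn.execute(sqlalchemy.text(
                    "CREATE TABLE orders (id INT PRIMARY KEY, total NUMERIC)"))
                conn.execute(sqlalchemy.text("INSERT INTO orders VALUES (1, 99.90)"))
                total = conn.execute(sqlalchemy.text(
                    "SELECT total FROM orders WHERE id = 1")).scalar()
            assert float(total) == 99.90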


Dump C++ and in Rust you should trust, Five Eyes agencies urge

Microsoft, CISA observes in its guidance, has acknowledged that about 70 percent of its bugs (CVEs) are memory safety vulnerabilities, with Google confirming a similar figure for its Chromium project and that 67 percent of zero-day vulnerabilities in 2021 were memory safety flaws. Given that, CISA is advising that organizations move away from C/C++ because, even with safety training (and ongoing efforts to harden C/C++ code), developers still make mistakes. "While training can reduce the number of vulnerabilities a coder might introduce, given how pervasive memory safety defects are, it is almost inevitable that memory safety vulnerabilities will still occur," CISA argues. ... Bjarne Stroustrup, creator of C++, has defended the language, arguing that ISO-compliant C++ can provide type and memory safety, given appropriate tooling, and that Rust code can be implemented in a way that's unsafe. But that message hasn't done much to dent the appeal of Rust and other memory-safe languages. CISA suggests that developers look to C#, Go, Java, Python, Rust, and Swift for memory-safe code.


How the insider has become the no.1 threat

For the organisation, this means the insider threat has not only become more pronounced but harder to counter. It requires effective management on two fronts: managing the remote/mobile workforce and dissuading employees from swapping credentials or data for cash. For these reasons, businesses need to reinforce the security culture through staff awareness training and step up their policy enforcement, in addition to applying technical controls to ensure data is protected at all times. That’s not what is happening today. The Apricorn survey found only 14% of businesses control access to systems and data when allowing employees to use their own equipment remotely, a huge drop from 41% in 2022. Nearly a quarter require employees to seek approval to use their own devices, but they do not then apply any controls once that approval has been granted. Even more concerning is that the number of organisations that don’t require approval or apply any controls has doubled over the past year. This indicates a hands-off approach that assumes a level of implicit trust, directly contributing to the problem of the insider threat.


WestRock CIDO Amir Kazmi on building resiliency

There are three leadership principles I would highlight that help build resilience in the team. First is recognizing the pace of change and responding to the impact it has on a team. It’s not getting slower; it’s getting faster. One of the behaviors that can help your team is to ‘explain the why.’ Set the context before the content behind what needs to be accomplished so we’re all on the same journey. Second is recognizing that we have to instill a learning and growth mindset in the culture, in the leadership, and in the fabric of what we’re trying to achieve. Many businesses are shifting their business models from product to service, and as leaders, it’s important to build a level of learning in that journey for your teams. One of the leaders that I admire and have learned from is John Chambers, who has said, ‘It’s all about speed of innovation and changing the way you do business.’ If we don’t reimagine ourselves, we will get disrupted. Third is transparency around what the key priorities are — because not everything can be a priority — and then creating flexibility around those priorities and how we get to the outcomes.


AI Governance in India: Aspirations and Apprehensions

While India’s stance on AI regulation has sometimes appeared to waver, it is steadily working towards establishing a clear regulatory approach and AI governance mechanism, especially as the country assumes a more prominent role in the area of AI-related international cooperation. AI-enabled harms and security threats exist at all three levels of the AI stack: At the hardware level, there are vulnerabilities in the physical infrastructure of AI systems. At a foundational model level, there are concerns around the use of inappropriate datasets, data poisoning, and issues related to data collection, storage, and consent. At the application level, there are threats to sensitive and confidential information as well as the proliferation of capability-enhancing tools among malicious actors. Therefore, while the governance of the tech stack is a priority, governance of the organisations developing AI solutions, or the people behind the technology, could also be productive. Even as democratisation has made AI more accessible, assigning responsibility and defining accountability for the operation of AI systems have become more difficult. 


Liability Fears Damaging CISO Role, Says Former Uber CISO

The average person on the street would think it reasonable that a CISO should be responsible for all aspects of an organization’s security, Sullivan acknowledged. However, the reality is the CISO role is unique among executive positions. “The CISO is fighting an uphill fight every day in their job. They’re begging for resources, they’re trying to get the rest of the company to slow down and think about the things they care about,” he noted. “Our job is different from everybody else’s. When you’re the executive responsible for security, you are the only executive who has active adversaries outside your organization trying to destroy you,” he added. ... Despite the growing personal risks for CISOs, Sullivan emphasized that “we should not run away from the situation,” adding that “if we do, we’ll miss a huge opportunity.” He believes there is a fundamental shift coming in terms of the regulation that’s on the horizon in cybersecurity, which will force organizations to revise how they approach security, and current security professionals must be ready to facilitate this change.


Middle East CISOs Fear Disruptive Cloud Breach

Data sovereignty regulations and de-globalization trends, for example, have led to the deployment of multi-cloud infrastructures that can support regional regulations and business mandates, according to the March research report, The Future of Cloud Security in the Middle East. "You will have your own cloud service provider within each country and already countries are adopting that culture — be it in the UAE or Saudi Arabia or any other country in the region," Rajesh Yadla, director and head of information security for Al Hilal Bank, stated in that report. "The reason is to make sure that the cloud service providers are compliant with all these regulations." Business and government leaders have taken cybersecurity seriously, however, with security the top factor in choosing a cloud provider: 43% of companies prioritize security, compared with 19% that prioritize cost, according to the report. Both Saudi Arabia and the UAE rank in the top 10 nations for cybersecurity, as measured by the Global Cybersecurity Index 2020, the most recent cybersecurity rankings of countries across the globe compiled by the International Telecommunication Union (ITU).


Parenting in the Digital Age: A Guide to Choosing Tech-Enabled Preschools

In recent years, technology integration in preschoolers’ education has become a game-changer in delivering personalised learning. By making education more fun and interactive with a robust arsenal of tools – AR applications, ERP apps and much more – teachers and parents have been able to tap into the receptivity of young minds, paving the way for both cognitive and emotional development. Augmented Reality (AR), an interactive experience that assimilates the real world and computer-generated content, stimulates multiple sensory modalities and is opening up new avenues in preschool education. By allowing young learners to immerse themselves in realistic experiences, AR elevates the learning process with computer simulations, 3D virtualisation and more, making it enhanced, effective and evocative. Departing from the traditional chalkboard and chart paper educational approach for preschoolers, parents have seismically shifted their preference to a tech-integrated curriculum. The advent of AR technology for early childhood learning brings forth a layer of interactive and engaging experiences.


Cyber Strategic Ambivalence Will Hit A Tipping Point In 2024

There are indications that technological advances, geopolitics, social influences, and other externalities are creating the conditions for what Thomas Kuhn called a “paradigm shift” (his 1962 book, The Structure of Scientific Revolutions, described the dynamics and the framework by which structural change emerges). The conditions for change that will result in a paradigm shift are the breadth, types and severity of attacks that are ongoing and will likely increase in 2024. The assessed global cyberattack losses in 2023 amount to $8 trillion, which is larger than any national economy except for the US and China! In other words, the collective black market – the illicit profits generated from cybercrime – is a larger economy than Germany or Japan or India. That is a look at the problem in monetary terms. Cyberattacks are now regularly compromising critical infrastructure, which places public safety at risk. In May of 2023, Denmark’s critical infrastructure network experienced its largest-ever cyberattack, which was highly coordinated and could have resulted in power outages.


How server makers are surfing the AI wave

There appears to be strong demand for high performance computing (HPC) hardware that includes graphics processing units (GPUs) for accelerating the performance of workloads and GPU-based servers. ... There is a growing realisation among many businesses that the hyperscalers are behind the curve when it comes to supporting the intellectual property of their GenAI users. This is opening up opportunities for specialist GPU cloud providers to offer AI acceleration in a way that allows customers to train foundational AI models based on their own data. Some organisations are also likely to buy and run private cloud servers configured as GPU farms for AI acceleration, fuelling the significant growth in demand for GPU-equipped servers from the major hardware providers. HPE recently announced an expanded strategic collaboration with Nvidia to offer enterprise computing for GenAI. HPE said the co-engineered, pre-configured AI tuning and inferencing hardware and software platform enables enterprises of any size to quickly customise foundation models using private data and deploy production applications anywhere.



Quote for the day:

"Your most unhappy customers are your greatest source of learning." -- Bill Gates

Daily Tech Digest - December 06, 2023

Three Ways Generative AI Is Overhauling Data Management

First, prioritize accuracy in SQL generation. Natural language to SQL (NL2SQL) translation has come a long way in understanding natural language queries, but some large language models (LLMs) are better than others in dealing with nuanced or complex questions. Second, ensuring efficient query execution on ad hoc questions is paramount. Historically, interactive querying in a data warehouse environment meant gathering requirements in advance and engineering the data through caching, denormalizing, and other techniques. Generative AI has changed expectations -- users now want immediate answers to novel questions. ... The shift towards vector embeddings is driven by the realization of the remarkable benefits they bring to storing and searching both structured and unstructured data as vectors. The core advantage of vector embeddings lies in their ability to represent complex data in an efficient format. By converting data into high-dimensional vectors, it becomes possible to capture the semantic relationships, context, and similarities between different data points.
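
As a toy illustration of that advantage, the Python sketch below ranks documents against a query by cosine similarity. The documents and their vectors are hand-made stand-ins for what a real embedding model would produce:

    import numpy as np

    # Hypothetical embeddings: in practice these come from an embedding model;
    # here the first two documents are deliberately placed close together.
    docs = {
        "refund policy for late deliveries": np.array([0.9, 0.1, 0.2]),
        "how to return a delayed order":     np.array([0.8, 0.2, 0.3]),
        "quarterly revenue by region":       np.array([0.1, 0.9, 0.7]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = np.array([0.85, 0.15, 0.25])  # stand-in embedding of "late package refund"
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    print(ranked[0])  # -> "refund policy for late deliveries"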


Reinforcement learning is useful in environments where precise reward functions can guide the learning process. It’s particularly effective in optimal control scenarios, gaming and aligning large language models (LLMs) with human preferences, where the goals and rewards are clearly defined. Robotics problems, with their complex objectives and the absence of explicit reward signals, pose a significant challenge for traditional RL methods. ... Despite its advantages, imitation learning is not without its pitfalls. A notable issue is the “distribution mismatch problem,” where an agent may encounter situations outside the scope of its training demonstrations, leading to a decline in performance. “Interactive imitation learning” mitigates this problem by having experts provide real-time feedback to refine the agent’s behavior after training. This method involves a human expert monitoring the agent’s policy in action and stepping in with corrective demonstrations whenever the agent strays from the desired behavior.
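
The passage does not name an algorithm, but the best-known form of interactive imitation learning is a DAgger-style loop, sketched below as a deliberately toy Python program. The expert relabels the states the agent's own rollouts actually visit, which is precisely what covers the distribution mismatch; expert_action and train are placeholder stubs, not a real robotics stack:

    import random

    def expert_action(state):
        # Placeholder expert: in practice a human supplies the corrective
        # action for states the agent visits. Here: steer toward zero.
        return -1 if state > 0 else 1

    def train(dataset):
        # Placeholder learner: memorizes labeled states, guesses elsewhere.
        lookup = dict(dataset)
        return lambda s: lookup.get(s, random.choice([-1, 1]))

    dataset = [(s, expert_action(s)) for s in range(-3, 4)]  # initial demos
    policy = train(dataset)

    for _ in range(5):  # DAgger-style iterations
        state, visited = random.randint(-10, 10), []
        for _ in range(20):  # roll out the *agent's* policy, not the expert's
            visited.append(state)
            state = max(-10, min(10, state + policy(state)))
        # Expert relabels the states the agent actually reached.
        dataset += [(s, expert_action(s)) for s in visited]
        policy = train(dataset)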


Don’t make Apache Kafka your database

The right strategy is to let Kafka do what it does best, namely ingest and distribute your events in a fast and reliable way. For example, consider an ecommerce website with an API that would traditionally save all data directly to a relational database with massive tables—with poor performance, scalability, and availability as the result. Introducing Kafka, we can design a superior event-driven ecosystem and instead push that data from the API to Kafka as events. This event-driven approach separates processing into separate components. One event might consist of customer data, another may have order data, and so on—enabling multiple jobs to process events simultaneously and independently. This approach is the next evolution in enterprise architecture. We’ve gone from monolith to microservices and now event-driven architecture, which reaps many of the same benefits of microservices with higher availability and more speed. Once events are sitting in Kafka, you have tremendous flexibility in what you do with them. If it makes sense for the raw events to be stored in a relational database, use an ecosystem tool like Kafka Connect to make that easy.
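
A minimal sketch of that API-to-Kafka handoff in Python, assuming the confluent-kafka client and a broker reachable at localhost:9092; the topic name and order payload are invented for illustration:

    import json
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def handle_order(order: dict) -> None:
        # Instead of writing straight to a relational database, the API
        # publishes the order as an event; billing, inventory, and analytics
        # consumers can each process it independently and in parallel.
        producer.produce(
            "orders",
            key=str(order["order_id"]),
            value=json.dumps(order).encode("utf-8"),
        )
        producer.poll(0)  # serve delivery callbacks without blocking

    handle_order({"order_id": 42, "customer": "c-7", "total": 19.99})
    producer.flush()  # block until outstanding events are delivered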


What it Takes to Be Your Organisation’s DPO or Data Privacy Lead

Just because you sought expert opinion on the matter a few years ago doesn’t mean you’re in the clear. ‘Once compliant’ doesn’t mean ‘still compliant’. It’s possible that you now need to appoint a DPO (data protection officer) or data privacy lead to be the single point of contact for questions, concerns, breaches, impact assessments or communication with the regulatory authorities. ... It’s not just the EU GDPR, the UK GDPR and the DPA 2018 that we may need to ensure compliance with. Privacy laws exist in almost every country and are relevant wherever you do business. You can design your data privacy systems such that they meet all these legal requirements. ... A DPO isn’t just a trusted adviser during business-as-usual times. They are at the command centre of a cross-functional team in tough times. Faced with an incident or a breach, a well-trained DPO can avert a crisis before social media can cause a catastrophe. Well-versed data privacy leads and DPOs can leap into action when needed, swiftly addressing and remediating issues, reporting to the necessary authorities and instigating lasting change. 


What should be in a company-wide policy on low-code/no-code development

The lesson here is to be thorough in assessment and then document and define existing use cases. Understanding why certain user groups are currently leveraging a particular low-code/no-code platform will help security and business leaders make risk calculations that will determine the course of future policies. The most immediate policy that will come out of this work will be one that defines acceptable use cases for low-code/no-code across the business. “Specify the application of low-code and no-code development across departments, as well as clearly state the purpose of low-code and no-code development,” says Vikas Kaushik, CEO of TechAhead, a development consultancy. This policy of purpose and scope is crucial for setting the course and the tone of the risk management policies around use cases. Some companies may choose to be very granular about this, breaking it down by lines of business, business function, user groups, or teams. Others may simply just delineate between professional developers and so-called citizen developers — tech-savvy business stakeholders.


The Grim Reality of a Cyberattack: From Probability to Certainty

In the unfortunate event that an organization gets hacked, there are certain actions a cybersecurity team can take (or avoid) that significantly impact recovery time and cost. The first action is to report the incident to all relevant authorities, just as someone would report a physical crime. Many organizations are legally obligated to report such instances, and informing the authorities helps protect other enterprises from similar attacks. It is worth noting that authorities and regulators are not going to assign blame. They seek to learn valuable lessons from attacks and build hacker profiles that help minimize the consequences other organizations may face. In addition, enterprises should alert their cyber insurance providers. This is often a prerequisite for filing a claim, and evidence must be presented to receive compensation. After the appropriate authorities have been notified, it is important that IT teams slow down and avoid making costly mistakes in haste.


AI revolutionising leadership talent identification

With leadership positions being critical for growth and sustenance, any bias in selecting C-suite people can have a damaging impact on the performance of organizations. AI ensures that the entire process of recruiting leaders remains objective, fair, and just. Unlike humans, who are driven by feelings and emotions, AI algorithms select candidates solely on the basis of their skills, competencies, and qualifications. This leaves little room for prejudice and allows firms to recruit diversified, dynamic, and vibrant individuals at the top echelons of the organization. ... Not only does AI help in recruiting top-level employees, but it also helps in predicting their engagement behaviours, attrition patterns, and potential departures. By combining the employment records of candidates with their present level of engagement, AI can predict the attrition rate at top leadership positions. For example, AI tools can alert employers to a sudden decrease in employees’ engagement levels or an increase in their job search activity online. This information can be used by employers to deepen engagement with their employees, strengthen their talent retention efforts, or devise a contingency plan in case of a sudden exit.


If You Want People to Follow You, Stop Being a Boss — 8 Steps to Truly Effective Leadership

The approach to mistakes and failures differentiates a leader from a boss. Where a boss might see a mistake as a failure to be criticized, a leader views it as an opportunity for growth. Positive reinforcement involves recognizing the effort, providing constructive feedback, and encouraging a mindset of continuous learning. This approach not only helps in skill development but also instills a sense of confidence and loyalty within the team, fostering a workplace culture where innovation is encouraged, and risks are viewed as steps towards growth. ... Empowerment is a key trait of effective leadership. It involves trusting the team's capabilities and allowing autonomy in their roles. This empowerment fosters a sense of ownership and responsibility among team members, leading to greater job satisfaction and innovation. In contrast, micromanagement can stifle creativity, lower morale and hinder productivity. Leaders who empower rather than micromanage find their teams are more motivated, creative, and ultimately more effective in achieving organizational goals.


Data governance and government: The need for effective and protective data management

Data governance in government involves establishing and enforcing policies, procedures, and standards to ensure the effective management, use, and protection of data. Several key issues and challenges are commonly faced in the context of data governance in government:
Data privacy and security: Governments handle vast amounts of sensitive and personally identifiable information. Ensuring the privacy and security of this data is a paramount concern, especially in the face of increasing cyber threats and data breaches.
Compliance with regulations: Governments must adhere to various regulations and compliance standards concerning data management, such as data protection laws, privacy regulations, and industry-specific requirements.
Interoperability: Government agencies often operate with disparate systems and databases. Achieving interoperability and ensuring seamless data exchange among different agencies is a significant challenge impacting the efficiency and effectiveness of government services.


Linus Torvalds on the state of Linux today and how AI figures in its future

Indeed, Torvalds hopes that AI might really help by being able "to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don't need any kind of higher intelligence to find them. But having tools that warn about more subtle cases where, for example, it may just say 'this pattern does not look like the regular pattern. Are you sure this is what you need?' And the answer may be 'No, that was not at all what I meant. You found an obvious bug. Thank you very much.' We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do." But, "What about hallucinations?" asked Hohndel. Torvalds, who will never stop being a little snarky, said, "I see the bugs that happen without AI every day. So that's why I'm not so worried. I think we're doing just fine at making mistakes on our own." Moving on, Torvalds said, "I enjoy the fact that open source, the notion of openness, has gotten so much more widely accepted. I enjoyed it particularly because I remember what it was thirty years ago when I had started this project, and people would ask me, 'Why?'"



Quote for the day:

"To be successful you must accept all challenges that come your way. You can't just accept the ones you like." -- Mike Gafka

Daily Tech Digest - December 05, 2023

Post-Quantum Cryptography: The lynchpin of future cybersecurity

Since we are still at least a decade away from an ideal quantum computer, this may not seem like an imminent threat. However, this is not the case, since annealing quantum computers are already a reality. While these are not capable of utilising Shor’s algorithm, they can attack the factoring problem by formulating it as an optimization problem, and much progress has already been made. Furthermore, there is the problem of “harvest now, decrypt later,” which essentially means that an attacker can steal data now, wait until quantum computers become a practical reality, and subsequently decrypt it at a later time. This implies that quantum computers already pose a very real threat, without even coming into existence. There is a distinct possibility that large amounts of data have already been compromised, and the rectification of this problem is an immediate concern, which is why the incorporation of PQC into current encryption protocols is absolutely imperative. For instance, according to IBM’s “Cost of a Data Breach Report 2023,” more than 95 percent of the organisations studied globally have experienced more than one data breach.
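
As an illustration of what adopting PQC looks like in code, the sketch below establishes a shared secret with a post-quantum key encapsulation mechanism. It assumes the liboqs Python bindings (the oqs package and its native library) are installed, and Kyber512 is used purely as an example algorithm name:

    import oqs

    KEM_ALG = "Kyber512"  # example NIST-selected KEM

    with oqs.KeyEncapsulation(KEM_ALG) as receiver:
        public_key = receiver.generate_keypair()  # receiver publishes this

        with oqs.KeyEncapsulation(KEM_ALG) as sender:
            # Sender encapsulates a fresh shared secret under the public key.
            ciphertext, secret_tx = sender.encap_secret(public_key)

        # Receiver recovers the same secret from the ciphertext; both sides
        # can now derive symmetric keys that "harvest now, decrypt later"
        # adversaries with future quantum computers cannot recover.
        secret_rx = receiver.decap_secret(ciphertext)

    assert secret_tx == secret_rx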


Payments for net zero – How the payments industry can contribute towards decarbonisation

It is crucial to involve senior leaders in comprehending the compelling reasons for both commercial and societal urgency to decarbonise. Furthermore, gaining insight into how various stakeholders (ranging from employees, investors, and regulators to civil society) are progressively aligning with the necessity for businesses and society to undergo decarbonisation will fortify the approach. This alignment creates a potent mandate and a unique opportunity for the payment network to discern and investigate its distinct role in facilitating the transition toward net zero. ... Payment networks & fintechs should allocate sufficient resources to explore alignment between their core capabilities and sectors/systems needing to decarbonise. This may involve investing in sustainability and climate change expertise within core teams such as data, product innovation, and strategy. Additionally, conducting robust research on trends and carbon impacts in various economic sub-sectors can help overlay payment networks’ capacities to pinpoint net-zero solutions. Engaging with external stakeholders can also aid in identifying and testing potential opportunity areas.


How AI-assisted code development can make your IT job more complicated

Increased use of AI will also mean personalization becomes an important skill for developers. Today's applications "need to be more intuitive and built with the individual user in mind versus a generic experience for all," says Lobo. "Generative AI is already enabling this level of personalization, and most of the coding in the future will be developed by AI." Despite the rise of generative technology, humans will still be required at key points in the development loop to assure quality and business alignment. "Traditional developers will be relied upon to curate the training data that AI models use and will examine any discrepancies or anomalies," Lobo adds. Technology managers and professionals will need to assume more expansive roles within the business side to ensure that the increased use of AI-assisted code development serves its purpose. We can expect this focus on business requirements to lead to a growth in responsibility via roles such as "ethical AI trainer, machine language engineer, data scientist, AI strategist and consultant, and quality assurance," says Lobo. 


The more the CIO can function as a centralized source for technology resources, the better, says Ping Identity’s Cannava, who sees this transpiring in three phases, depending on the maturity of the organization. In Phase 1, the CIO is the clearinghouse for current technology projects, taking on the traditional role as in-house consultant. In Phase 2, the CIO becomes the clearinghouse for data within the organization. “In many cases, we are the keepers of the keys to datasets,” he says. “We have the ability to bring datasets together, and those insights could drive what the agenda is for the business. They could show us where we have the opportunity to improve our go-to-market. So having that access to the insights driving business intelligence initiatives has allowed us to expand our seat at the table.” In Phase 3, the CIO also becomes the clearinghouse for emerging technologies. Because, he says, to truly unlock the potential of all that data, you need artificial intelligence. And that raises some immediate questions for CIOs who want to be orchestrators. 


How DoorDash Migrated from Aurora Postgres to CockroachDB

Until the monolith was broken up, it offered a single view of the toll that demand for the application was taking on the databases. But once that monolith was broken into microservices, that visibility would disappear. “Our biggest enemy was the single primary architecture of our database,” Salvatori said. “And our North Star would be to move to a solution that offered multiple writers.” In the meantime, the DoorDash team adopted a “poor man’s solution” approach to dealing with its overmatched database architecture, Salvatori told the Roachfest audience: building vertical federation of tables, while not blocking microservices extractions. In this game of “whack-a-mole,” he said, “Different tables would be able to get their own single writer and therefore scale a little bit and allow us to keep the lights on for a little bit longer. But we needed to take steps toward limitless horizontal scalability.” CockroachDB, a distributed SQL database management system, seemed like the right answer.


Taming the Virtual Threads: Embracing Concurrency With Pitfall Avoidance

When a virtual thread needs to process a long computation, it can occupy its carrier thread excessively, preventing other virtual threads from utilizing that carrier thread. For example, when a virtual thread repeatedly performs blocking operations, such as waiting for input or output, it monopolizes the carrier thread, preventing other virtual threads from making progress. Inefficient resource management within the virtual thread can also lead to excessive resource utilization, causing monopolization of the carrier thread. Monopolization can have detrimental effects on the performance and scalability of virtual thread applications. It can lead to increased contention for carrier threads, reduced overall concurrency, and potential deadlocks. To mitigate these issues, developers should strive to minimize monopolization by breaking down lengthy computations into smaller, more manageable tasks to allow other virtual threads to interleave and utilize the carrier thread.
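
The advice above concerns Java's virtual threads, but the underlying principle, chunking long computations so other tasks can interleave, can be sketched with Python's asyncio as an analogy: a CPU-bound coroutine that never yields starves the event loop, much as a monopolizing virtual thread starves its carrier thread:

    import asyncio

    async def long_computation(n: int) -> int:
        total = 0
        for i in range(n):
            total += i * i
            if i % 10_000 == 0:
                # Yield to the scheduler so this task does not monopolize
                # the event loop, analogous to releasing the carrier thread
                # between smaller chunks of work.
                await asyncio.sleep(0)
        return total

    async def heartbeat() -> None:
        for _ in range(3):
            print("still responsive")
            await asyncio.sleep(0.01)

    async def main() -> None:
        # Without the periodic yield above, heartbeat() would be starved
        # until the entire computation finished.
        await asyncio.gather(long_computation(1_000_000), heartbeat())

    asyncio.run(main())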


The all-flash datacentre: Mirage or imminent reality?

The initial advantage of flash over HDDs was speed. Flash was adopted in workstations and laptops, and in enterprise servers running performance-critical and especially I/O-dependent applications. Flash’s performance edge is greatest on random reads and writes. The gap is narrower for sequential read/write operations. A well-configured HDD array with flash-based caching comes close enough to all-flash speeds in real-world environments. “It does depend what infrastructure you have and what characteristics you are looking for from your storage,” says Roy Illsley, chief analyst of IT operations at Omdia. “That includes performance on read, on writes, capacity. The most appropriate [storage] for your needs could be flash, or just as equally spinning media. All flash datacentres may be a reality where workloads require the strength of flash, but I am not expecting all-flash datacentres to become commonplace.” According to Rainer Kaise, senior manager of business development at Toshiba Electronics Europe – a hard drive manufacturer – 85% of the world’s online media is still stored on HDDs.


How cybersecurity teams should prepare for geopolitical crisis spillover

It is one thing to understand why geopolitical spillover impacts private enterprise but another to be able to assign any kind of probability of risk to them. Fortunately, research on global cyber conflict and enterprise cybersecurity provides a reasonable starting point for dealing with this uncertainty. Scholars and policy commentators are interested in linking the realities of cyber operations to situational risk profiles, particularly for non-degradation threats for which traditional security assessment processes tend to be insufficient. Performative attacks come with perhaps the most obvious set of threat indicators. Companies that are "named and shamed" during geopolitical crisis moments tend to have one of two characteristics. First, their symbolic profile is constitutionally indivisible in the context of the current conflict. This means that a firm, through its statements, actions, or productions, clearly underwrites one side in the conflict. Media organizations that consistently toe a national line, such as Russia's Pravda, are an example of this, but so are firms with leaders or major stakeholders belonging to ethnic, religious, or linguistic backgrounds pertinent to a crisis.


Can cloud computing be truly federated?

The core idea is to save money, but it requires accepting that the physical resources could be scattered across any system willing to be part of the federated cloud. I’m not going to push this idea in silly directions, such as taking over someone’s smartwatch as a peer node, but there is a vast quantity of underutilized hardware out there, still running and connected to a network in an enterprise data center, that could be leveraged for this model. The idea of a federated public cloud service does exist today at varying degrees of maturation, so please don’t send me an angry email telling me your product has been doing this for years and that I’m somehow a bad person for not knowing it existed. As I said, federation is an old architectural concept many have adopted. What is new is bringing it to a widely used public cloud computing platform, which we haven’t seen yet for the most part. In this approach, a centralized system coordinates the provisioning of traditional cloud services such as storage and computing between the requesting peer and a peer that could provide that service.


How AI is revolutionizing “shift left” testing in API security

SAST and DAST are well-established web application tools that can be used for API testing. However, the complex workflows associated with APIs can result in an incomplete analysis by SAST. At the same time, DAST cannot provide an accurate assessment of the vulnerability of an API without more context on how the API is expected to function correctly, nor can it interpret what constitutes a successful business logic attack. In addition, while security and AppSec teams are at ease with SAST and DAST, developers can find them challenging to use. Consequently, we’ve seen API-specific test tooling gain ground, enabling things like continuous validation of API specifications. API security testing is increasingly being integrated into the API security offering, translating into much more efficient processes, such as automatically associating appropriate APIs with suitable test cases. A major challenge with any application security test plan is generating test cases tailored explicitly for the apps being tested before release. 
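
One concrete flavor of API-specific test tooling is validating live responses against the API's published contract, alongside business-logic checks that DAST alone tends to miss. A minimal Python sketch, assuming the requests and jsonschema packages; the endpoint, token, and schema fragment are all invented for illustration:

    import requests
    from jsonschema import validate

    # Hypothetical fragment of the contract for GET /users/{id}.
    USER_SCHEMA = {
        "type": "object",
        "required": ["id", "email"],
        "properties": {"id": {"type": "integer"}, "email": {"type": "string"}},
        "additionalProperties": False,
    }

    def test_user_endpoint_matches_contract():
        resp = requests.get("https://api.example.com/users/1", timeout=5)
        assert resp.status_code == 200
        validate(instance=resp.json(), schema=USER_SCHEMA)

    def test_object_level_authorization():
        # Business-logic check: user 1's token must not read user 2's
        # record (BOLA/IDOR), something a generic scanner rarely infers.
        resp = requests.get(
            "https://api.example.com/users/2",
            headers={"Authorization": "Bearer user-1-token"},  # placeholder
            timeout=5,
        )
        assert resp.status_code in (403, 404)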



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine

Daily Tech Digest - December 04, 2023

Proactive, not reactive: the path to ensuring operational resilience in cybersecurity

Operational resilience goes beyond ensuring business continuity by mitigating disruptions as and when they occur. Resilience needs a proactive approach to maintaining stable and reliable digital systems, regardless of the severity of threat incidents. This "bankability" (excuse the pun) of the financial system is critical to preserving public trust and confidence in the global financial system. Given the interconnectedness of financial firms with external third parties, any plan for operational resilience needs to address multiple lines of communication, automated systems of interactions and information sharing, and a growing attack surface. ... The dependence of the financial sector on the telecom and energy industries, and the increasingly global nature of the sector means that operational resilience exercises need to not just be cross-border, but cross-sector too. Today, national or even global-level threats are a reality, emphasizing the need to include government partners in the exercises. After all, protecting critical private infrastructure safeguards a nation's financial stability.


Black-Box, Gray Box, and White-Box Penetration Testing: Importance and Uses

Gray-box penetration testing can simulate advanced persistent threat (APT) scenarios in which the attacker is highly sophisticated and operates on a longer time scale (CISA, 2023). In these types of attacks, the threat actor has collected a good deal of information about the target system—similar to a gray-box testing scenario. Gray-box penetration testing allows many organizations to strike the right balance between white-box and black-box testing. ... The main disadvantage of gray-box testing is that it can be too “middle-of-the-road” when compared with black-box or white-box testing. If organizations do not strike the right balance during gray-box testing, they may miss crucial insights that would have been found with a different technique. ... Black-box, gray-box, and white-box testing are all valuable forms of penetration testing, each with its own pros, cons, and use cases. Penetration testers need to be familiar with the importance and use cases of each type of test to execute them most efficiently, using the right tools for each one.


The arrival of genAI could cover critical skills gaps, reshape IT job market

While genAI offers the promise of clear business benefits, education is key and collaboration with cybersecurity and risk experts is needed to help establish an environment where the technology can be used safely, securely, and productively, according to Emm. Hurdles to adopting AI persist. Those issues include high costs, uncertain return on investment (ROI), the need to upskill entire staffs, and potential exposure of sensitive corporate data to unfamiliar automation technology. Few organizations, however, have put appropriate safeguards in place to guard against some of genAI's most well-known flaws, such as hallucinations, exposure of corporate data, and data errors. Most are leaving themselves wide open to the acknowledged risks of using genAI, according to Kaspersky. For example, only 22% of C-level executives have discussed putting rules in place to regulate the use of genAI in their organizations — even as they eye it as a way of closing the skills gap. Cisco CIO Fletcher Previn, whose team is working to embed AI in back-end systems and products, said it's critical to have the policies, security, and legal guardrails in place to be able to "safely adopt and embrace AI capabilities other vendors are rolling out into other people’s tools."


State of Serverless Computing and Event Streaming in 2024

Traditional stream processing usually involves an architecture with many moving parts managing distributed infrastructure and using a complex stream processing engine. For instance, Apache Spark, one of the most popular processing engines, is notoriously difficult to deploy, manage, tune and debug (read more about the good, bad and ugly of using Spark). Implementing a reliable, scalable stream processing capability can take anywhere between a few days and a few weeks, depending on the use case. On top of that, you also need to deal with continuous monitoring, maintenance and optimization. You may even need a dedicated team to handle this overhead. All in all, traditional stream processing is challenging, expensive and time consuming. In contrast, serverless stream processing eliminates the headache of managing a complex architecture and the underlying infrastructure. It’s also more cost effective, since you pay only for the resources you use. It’s natural that serverless stream processing solutions have started to appear. 


The Glaring Gap in Your Cybersecurity Posture: Domain Security

Because domain names are used for marketing and brand initiatives, security teams may feel that protecting online domain names falls under the marketing or legal side of the business. Or, they may have left domain protection in the hands of their IT department. But, if organizations are unfamiliar with who their domain registrars even are, chances are they are unaware of the policies the registrars use and the security measures they have in place for branded, trademarked domains. Domain security should be an essential branch of cybersecurity, protecting brands online, but it is not always the highest priority for consumer-grade domain registrars. Unfortunately, adversaries are well aware of the growth in businesses’ online presence and the often minimal attention given to domain security, leading them to take a special interest in targeting corporate and/or government domain names that are left exposed. Organizations will continue to find themselves in the path of a perfect storm for domain and DNS attacks and potential financial or reputational devastation if they continue to allow blind spots to build up in their security posture.


Put guardrails around AI use to protect your org, but be open to changes

While a seasoned CISO might recognize that the output from ChatGPT in response to a simple security question is malicious, it’s less likely that another member of staff will have the same antenna for risk. Without regulations in place, any employee could be inadvertently stealing another company’s or person’s intellectual property (IP), or they could be delivering their own company’s IP into an adversary’s hands. Given that LLMs store user input as training data, this could contravene data privacy regulations, including GDPR. Developers are using LLMs to help them write code. When this is ingested, it can reappear in response to a prompt from another user. There is nothing that the original developer can do to control this because the LLM was used to help create the code, making it highly unlikely that they can prove ownership of it. This might be mitigated by using a GenAI license which helps enterprises to guard against their code being used as an input for training. However, in these circumstances, imposing a “trust but verify” approach is a good idea.


Why Generative AI Threatens Hospital Cybersecurity — and How Digital Identity Can Be One of Its Greatest Defenses

Writing convincing deceptive messages isn’t the only task cyber attackers use ChatGPT for. The tool can also be prompted to build mutating malicious code and ransomware by individuals who know how to circumvent its content filters. It’s difficult to detect and surprisingly easy to pull off. Ransomware is particularly dangerous to healthcare organizations as these attacks typically force IT staff to shut down entire computer systems to stop the spread of the attack. When this happens, doctors and other healthcare professionals must go without crucial tools and shift back to using paper records, resulting in delayed or insufficient care which can be life-threatening. Since the start of 2023, 15 healthcare systems operating 29 hospitals have been targeted by a ransomware incident, with data stolen from 12 of the 15 healthcare organizations affected. This is a serious threat that requires serious cybersecurity solutions. And generative AI isn’t going anywhere — it’s only picking up speed. It is imperative that hospitals lay thorough groundwork to prevent these tools from giving bad actors a leg up.


15 Essential Data Mining Techniques

The essence of data mining lies in the fundamental technique of tracking patterns, a process integral to discerning and monitoring trends within data. This method enables the extraction of intelligent insights into potential business outcomes. For instance, upon identifying a sales trend, organizations gain a foundation for taking strategic actions to leverage this newfound insight. When it’s revealed that a specific product outperforms others within a particular demographic, this knowledge becomes a valuable asset. Organizations can then capitalize on this information by developing similar products or services tailored to the demographic or by optimizing the stocking strategy for the original product to cater to the identified consumer group. In the realm of data mining, classification techniques play a pivotal role by scrutinizing the diverse attributes linked to various types of data. By discerning the key characteristics inherent in these data types, organizations gain the ability to systematically categorize or classify related data. This process proves crucial in the identification of sensitive information.
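
As a toy illustration of the classification technique, the Python sketch below uses scikit-learn on six fabricated customer records to learn a category from two attributes and then classify a new customer:

    from sklearn.tree import DecisionTreeClassifier

    # Fabricated records: [age, monthly_spend] and the product line bought.
    X = [[23, 40], [31, 65], [45, 300], [52, 280], [29, 55], [48, 310]]
    y = ["basic", "basic", "premium", "premium", "basic", "premium"]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # Classify a new customer by the attributes the model learned on.
    print(model.predict([[50, 290]]))  # -> ['premium']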


SolarWinds lawsuit by SEC puts CISOs in the hot seat

Without ongoing, open dialogue between these leaders, it’s impossible to guarantee complete awareness of the range of complications associated with potential cyber risks. Now that we’ve seen how these risks can easily extend beyond security concerns into catastrophic financial and legal issues, it’s important that conversations about them are not taking place exclusively among CISOs. The roles and responsibilities of CISOs and other C-suite executives vary dramatically, which can naturally result in siloed processes and priorities. However, to ensure alignment and effectively protect an organization from data breaches and legal recourse alike, it’s imperative that business leaders learn to “speak the same language” and share information to align their efforts and goals. CFOs and CISOs must collaborate to evaluate the relationships between cybersecurity incidents and legal risks. We can facilitate this by leveraging cyber risk quantification and management tools, which aggregate data to calculate, quantify, and translate information about threats and vulnerabilities into lay terms and easily digestible data.
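One common way such tools translate threats into digestible figures is the classic annualized loss expectancy (ALE) model, sketched minimally below; the dollar amounts and probabilities are hypothetical, and real quantification platforms use far richer models.

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """Turn a threat scenario into an expected annual dollar loss:
    ALE = (asset value x exposure factor) x annual rate of occurrence."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a $5M customer database, 40% impact per breach, one breach
# expected every four years -> $500,000 expected annual loss.
print(annualized_loss_expectancy(5_000_000, 0.40, 0.25))  # -> 500000.0
```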


CTO interview: Greg Lavender, Intel

“Our confidential computing capability is also a privacy-ensuring capability,” says Lavender. “Europe is ahead in this area, with the notion of sovereign clouds. Intel partners with some of the European governments on sovereign cloud using Intel’s platforms for confidential computing. The privacy-preserving capabilities are built into these platforms, which, beyond government, will also be useful in regulated industries like financial services, healthcare and telcos.” “We also see a convergence in AI that will open up a big market for our privacy-ensuring software and hardware,” says Lavender. “You spend a lot of time prepping your data, tagging your data, getting your data ready for training, usage or inference usage. You want to do that securely in a multi-tenant environment. Our platforms give you the opportunity to do your training securely between the CPU and the GPU, and then you can deploy it securely in the cloud or at the edge.” “I’m talking with a lot of CIOs about this technology, because data is now such a valuable thing. It’s what you use to train your models. You don’t want somebody else to get access to that data, because then they can use it to train their models and offer competing services.”
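The pattern Lavender describes can be illustrated with a toy attestation flow: the data owner releases the key that decrypts training data only after the remote environment proves it is running approved code. Everything below, including the HMAC-based “report”, is a hypothetical stand-in for a real TEE attestation protocol, not Intel’s implementation.

```python
import hashlib
import hmac

SIGNING_KEY = b"shared-attestation-key"  # stand-in for a vendor PKI
TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-training-workload").hexdigest()

def make_report(measurement: str) -> dict:
    """What the 'enclave' sends: a measurement of its code, signed."""
    sig = hmac.new(SIGNING_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}

def release_data_key(report: dict) -> bytes | None:
    """Hand over the decryption key only if the report is authentic
    and the measured code is on the approved list."""
    expected = hmac.new(SIGNING_KEY, report["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    if (hmac.compare_digest(expected, report["signature"])
            and report["measurement"] == TRUSTED_MEASUREMENT):
        return b"data-encryption-key"
    return None

print(release_data_key(make_report(TRUSTED_MEASUREMENT)))  # key released
print(release_data_key(make_report("tampered-code")))      # None
```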



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - December 03, 2023

The need to upskill India’s tech talent is critical. Why? Because India has perhaps the most to gain or lose when it comes to the impact of Generative AI. A survey by ServiceNow found that India needs to upskill 1.62 crore (16.2 million) workers in AI and automation, with 4.7 million new technology jobs expected by 2027, to meet the nation’s skills deficit. ... Now, the question is, can Generative AI help train such a large young population? Yes! This technology can create personalized learning paths. With modules integrated with AI to optimize outcomes, students can learn better with real-time feedback and take advantage of a more customized learning experience. If India is to become a global economic superpower, engineers must become tech-agnostic and adaptable in a world that’s changing fast. A Generative AI layer must be integrated into their education modules. This will equip them with cutting-edge skills and ensure versatility, from software developers to prompt engineers, enabling them to navigate diverse technological domains.


5 Ways To Demonstrate Leadership Skills In A Team Meeting

Don't just speak to be heard; speak to provide tangible solutions. Leaders are always thinking about innovative ways to drive positive business outcomes, and it's your responsibility in these meetings to think of creative solutions to the challenges your team is facing. Even if you don't have the complete solution yet, make some recommendations with a "What if we tried XYZ approach?" This engages other team members and shows that you are confident in sharing your ideas, while simultaneously ensuring everyone is included and feels heard—a mark of a true leader. ... One of the most difficult and embarrassing situations any professional can face is acknowledging that they've made a mistake or accidentally jeopardized the success of a team project. But it's the bravest thing to do, and taking ownership of your mistakes is an essential quality of a rising leader. Refuse to cast blame on others or talk behind your colleagues' backs, because this can destroy trust. Instead, seek ways to rectify the situation and actively discuss solutions.


How can AI and advanced analytics streamline due diligence processes in financial industries

In an era of increasing digital transactions, customer due diligence (CDD) demands robust identity verification processes. AI brings biometric data, document analysis, and identity validation methods to the forefront, enhancing the accuracy and speed of customer due diligence. OCR, face match, liveness detection, match logic, and digital address verification facilitate contactless KYC and paperless onboarding. These technologies not only streamline onboarding processes but also contribute to a more secure and fraud-resistant financial ecosystem. Staying compliant with an ever-changing regulatory landscape is a perpetual challenge for financial institutions. AI provides a dynamic solution by automating the monitoring and adaptation to regulatory changes. Leveraging data analytics to best utilise and parse alternate data sources, such as utility bills, financial account data, etc., can help further track customer behaviour while empowering the team to identify discrepancies and stay compliant. From anti-money laundering (AML) to know-your-customer (KYC) tech, AI ensures that due diligence processes remain effective and consistently aligned with the latest regulatory standards. 
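The orchestration these checks imply can be sketched as below; each verification function is a stub standing in for a real OCR, face-match, or liveness service, so only the match-logic pattern is the point here.

```python
def extract_fields_with_ocr(document_image: bytes) -> dict:
    return {"name": "A. Customer"}  # stub: a real OCR service goes here

def faces_match(document_image: bytes, selfie_image: bytes) -> bool:
    return True  # stub: a real face-match model goes here

def is_live_capture(selfie_image: bytes) -> bool:
    return True  # stub: a real liveness detector goes here

def run_kyc(document_image: bytes, selfie_image: bytes) -> tuple[bool, list[str]]:
    """Apply each check and collect the reasons for any failure."""
    reasons = []
    if not extract_fields_with_ocr(document_image).get("name"):
        reasons.append("document unreadable")
    if not faces_match(document_image, selfie_image):
        reasons.append("face mismatch")
    if not is_live_capture(selfie_image):
        reasons.append("possible spoofed selfie")
    return (not reasons, reasons)

print(run_kyc(b"id-scan-bytes", b"selfie-bytes"))  # -> (True, [])
```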


The World Depends on 60-Year-Old Code No One Knows Anymore

The problem is that very few people are interested in learning COBOL these days. Coding it is cumbersome, it reads like an English lesson (too much typing), the coding format is meticulous and inflexible, and it takes far longer to compile than its competitors. And since nobody's learning it anymore, programmers who can work with and maintain all that code are increasingly hard to find. Many of these "COBOL cowboys" are aging out of the workforce, and replacements are in short supply. ... If it proves successful, the watsonx code assistant could have huge implications for the future, but not everyone is convinced it's the silver bullet IBM says it is. Many who remember IBM’s previous AI experiment, Watson Health, are hesitant to trust another big AI project from the company because the previous one failed so miserably and didn't deliver on its high-flying promises. Gartner Distinguished Vice President and Analyst Arun Chandrasekara is also skeptical because “IBM has no case studies, at this time, to validate its claims,” he says.


Tech Works: How to Build a Life-Long Career in Engineering

As Hightower put it, “You get to move as fast as you’re willing to believe that you can. You identify a problem and you execute it.” Aim to be agile in mindset and practice as long as you can, both within your organization and in your own career. Nothing is precious. Always look for opportunities to learn. Getting stuck with one language or framework limits where you can move and your ability to change. It may even have you wasting time rebuilding things in your framework of choice, as Hightower acknowledged he used to do with Python. ... IT is a massive cost center that often has to justify itself to an organization’s budget makers, especially with a recession looming. An underappreciated benefit of platform engineering, for instance, is that it can enable a conversation between the tech side and the business side. Developers and engineers benefit from this conversation: they feel a deeper sense of purpose when their work is more closely connected to business goals. That means, especially in a time of increased automation, storytelling is of great value. Acting as a translator and context-giver can help boost an engineer’s value.


Bridging the gap between cloud vs on-premise security

Cloud-native security architectures like SASE and SSE can offer the east-west protection typically delivered by a data center firewall by rerouting all internal traffic through the closest point of presence (PoP). Unlike a local firewall that comes with its own configuration and management constraints, firewall policies configured in the SSE PoP can be managed via the platform’s centralized management console. ... As security functions move increasingly to the cloud, it’s crucial not to lose sight of the controls and security measures needed on-site. Cloud-native protections aim to increase coverage while reducing complexities and boosting convergence. As critical as it is to enable east-west traffic protection within SASE and SSE architectures, it’s equally important to maintain the unified visibility, control, and management offered by such platforms. To achieve this, organizations must avoid getting carried away by emerging threats and adding back disparate security solutions. 
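A toy version of that centralized east-west policy model might look like the following: rules are defined once, as they would be in an SSE platform's management console, and evaluated at whichever PoP handles the flow. The rule format and networks are hypothetical.

```python
import ipaddress

# Centrally defined rules, first match wins; a real platform would sync
# these to every PoP from one management console.
POLICIES = [
    {"src": "10.1.0.0/16", "dst": "10.2.0.0/16", "port": 5432, "action": "allow"},
    {"src": "10.0.0.0/8",  "dst": "10.0.0.0/8",  "port": None, "action": "deny"},
]

def evaluate(src_ip: str, dst_ip: str, port: int) -> str:
    """Return the action of the first matching rule (default deny)."""
    for rule in POLICIES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])
                and rule["port"] in (None, port)):
            return rule["action"]
    return "deny"

print(evaluate("10.1.4.7", "10.2.0.12", 5432))  # east-west DB traffic -> allow
print(evaluate("10.1.4.7", "10.3.0.12", 22))    # everything else -> deny
```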


5 tweaks every developer should make in Windows 11

The new Windows Terminal is nothing short of fantastic. It's a night-and-day improvement that allows you to run PowerShell, cmd, and WSL sessions within one window. It's customizable, has great tab support, and is even open source. It has a JSON settings configuration similar to VSCode's, which is well worth exploring, and the built-in GUI menus let you set your default shell, among a range of other things. The new terminal also supports side-by-side windows or split panes, and background opacity settings. ... While Microsoft has made great strides in recent years trying to win back developers, the Windows file system has often been a pain point. Developers have long been used to a Linux/Unix file system, where creating and managing thousands of small files for dependencies has a trivial impact on overall system performance, and many common tools have been built with this in mind. NTFS has long been known to suffer a performance gap against the de facto Linux standard, ext4, and Windows Defender's real-time protection can slow things down even further.
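If you want to see that small-file gap for yourself, a quick benchmark like the sketch below, run once with Defender real-time protection covering the directory and once with an exclusion in place, makes the difference visible. The file count and contents are arbitrary.

```python
import tempfile
import time
from pathlib import Path

def small_file_benchmark(count: int = 5000) -> float:
    """Time the creation of many tiny files, node_modules-style."""
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        for i in range(count):
            Path(tmp, f"dep_{i}.js").write_text("// stub dependency\n")
        return time.perf_counter() - start

print(f"{small_file_benchmark():.2f}s to create 5000 small files")
```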


Data Observability: Reliability in the AI Era

Data observability is characterized by the ability to accelerate root cause analysis across data, systems, and code, and to proactively set data health SLAs at the organization, domain, and data product levels. ... Data engineers are going to be building more pipelines faster (thanks, Gen AI!) and tech debt is going to accumulate right alongside them. That means degraded query, DAG, and dbt model performance. Slow-running data pipelines cost more, are less reliable, and deliver a poor experience to data consumers. That won’t cut it in the AI era, when data is needed as soon as possible, and especially not when the economy is forcing everyone to take a judicious approach to expenses. Pipelines need to be optimized and monitored for performance, and data observability has to cater for that. ... This will shock no one who has been in the data engineering or machine learning space for the last few years, but LLMs perform better in areas where the data is well-defined, structured, and accurate. Not to mention, there are few enterprise problems to be solved that don’t require at least some context about the enterprise.
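A data health SLA of the kind described can be as simple as a freshness threshold per table, monitored continuously; the sketch below is a minimal illustration with hypothetical table names and thresholds.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLAs per table.
FRESHNESS_SLAS = {
    "orders":    timedelta(hours=1),
    "customers": timedelta(hours=24),
}

def breached_slas(last_loaded: dict[str, datetime]) -> list[str]:
    """Return the tables whose latest load breaches their SLA."""
    now = datetime.now(timezone.utc)
    return [table for table, sla in FRESHNESS_SLAS.items()
            if now - last_loaded[table] > sla]

loads = {
    "orders":    datetime.now(timezone.utc) - timedelta(hours=3),  # stale
    "customers": datetime.now(timezone.utc) - timedelta(hours=2),  # fresh
}
print(breached_slas(loads))  # -> ['orders']
```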


Top 9 Cybersecurity Trends in 2024

Looking to the future of cybersecurity, companies will need to implement new cyber defenses to combat info stealer malware, he adds. Organizations should seek comprehensive malware remediation strategies to neutralize stolen data before it’s used in further cyber incidents. “Session cookies, passwords, and APIs can remain active for weeks or months after they were initially stolen, leaving organizations vulnerable to follow-up or repeat attacks using the same data,” according to Hilligoss. “A holistic post-infection remediation plan that includes monitoring the dark web for malware-stolen data allows enterprises to invalidate any compromised sessions and patch vulnerabilities before criminals use the information to cause harm.” ... In addition, the SEC charges against the SolarWinds chief information security officer (CISO) will change that role in 2024, according to Thomas Kinsella, co-founder and chief customer officer at Tines, a security workflow automation company. The SEC’s decision means more cybersecurity issues will escalate to the boardroom, as CISOs force the entire company to accept the risk rather than shouldering it alone.
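The session-invalidation step Hilligoss describes reduces, in the simplest case, to intersecting a dark-web monitoring feed with the set of still-active sessions; the sketch below uses hypothetical identifiers and in-memory stores in place of a real session backend.

```python
# Hypothetical stores: live sessions and identifiers seen in a
# dark-web monitoring feed of malware-stolen data.
active_sessions = {"sess-123": "alice", "sess-456": "bob"}
stolen_session_ids = {"sess-456", "sess-999"}

def revoke_compromised(active: dict[str, str], stolen: set[str]) -> list[str]:
    """Invalidate any active session that appears in the stolen feed."""
    revoked = []
    for session_id in stolen & active.keys():
        user = active.pop(session_id)  # kill the session
        revoked.append(f"revoked {session_id} for {user}")
    return revoked

print(revoke_compromised(active_sessions, stolen_session_ids))
# -> ['revoked sess-456 for bob']
```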


Turmoil at OpenAI shows we must address whether AI developers can regulate themselves

In the background, there have been reports of vigorous debates within OpenAI regarding AI safety. This not only highlights the complexities of managing a cutting-edge tech company, but also serves as a microcosm for broader debates surrounding the regulation and safe development of AI technologies. Large language models (LLMs) are at the heart of these discussions. LLMs, the technology behind AI chatbots such as ChatGPT, are exposed to vast sets of data that help them improve what they do – a process called training. However, the double-edged nature of this training process raises critical questions about fairness, privacy, and the potential misuse of AI. Training data reflects both the richness and biases of the information available. The biases may reflect unjust social concepts and lead to serious discrimination, the marginalising of vulnerable groups, or the incitement of hatred or violence. Training datasets can be influenced by historical biases. 



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis