Daily Tech Digest - February 18, 2024

Remote Leadership Strategies for Sustained Engagement

The leaders foresee a future where AI and collaboration technologies continue to reduce the friction of remote working and increase collaboration in the virtual world. “With the release of solutions such as Apple Vision, this will be the start of truly immersive remote leadership and collaboration that is both inclusive and focussed on employee wellbeing,” Boast says. “All this said, I hope we continue to make an effort to meet in person periodically to refresh and renew connections.” For Ratnavira, leaders have a critical role in fostering trust, continuous communication, and feedback, which is key to unlocking the full potential of a remote workforce and building high-performance teams. “A culture-first organization intuitively figures remote work because there is a lot of trust placed in individuals and investment made in their overall growth,” says Sambandam. Remote work models have proven that success can thrive in this transformative approach. “What was once the ‘new normal’ is now etched into the fabric of our operations,” he adds. “This isn’t a temporary shift; it’s a paradigm shift with no point of return.”


The Rise of Small Language Models

Small language models are essentially more streamlined versions of LLMs, in regards to the size of their neural networks, and simpler architectures. Compared to LLMs, SLMs have fewer parameters and don’t need as much data and time to be trained — think minutes or a few hours of training time, versus many hours to even days to train a LLM. Because of their smaller size, SLMs are therefore generally more efficient and more straightforward to implement on-site, or on smaller devices. Moreover, because SLMs can be tailored to more narrow and specific applications, that makes them more practical for companies that require a language model that is trained on more limited datasets, and can be fine-tuned for a particular domain. Additionally, SLMs can be customized to meet an organization’s specific requirements for security and privacy. Thanks to their smaller codebases, the relative simplicity of SLMs also reduces their vulnerability to malicious attacks by minimizing potential surfaces for security breaches. On the flip side, the increased efficiency and agility of SLMs may translate to slightly reduced language processing abilities, depending on the benchmarks the model is being measured against.


Why software 'security debt' is becoming a serious problem for developers

Larger tech enterprises appear to be the most likely to have critical levels of security debt, according to the report, with over three times as many large tech firms found to have critical security debt compared to government organizations. The flaws that make up this debt were found in both the first-party code and third party application code taken from open source libraries, for example. The study found nearly two-thirds (63%) of the applications scanned had flaws in the first-party code, compared to 70% that had flaws in their third-party code. ... Eng’s advice for reducing security debt caused by flaws in first party code is to better integrate security testing into the entire software development lifecycle (SDLC) to ensure devs catch issues earlier in the process. If developers are forced to carry out security testing before they can merge new code into the main repository, this would go a long way in reducing flaws in first party code, Eng argued. But, Eng noted, this is not how the majority of businesses operate their development teams. “The problem is not every company is doing security testing at that level of granularity. 


Mythbust Your Way to Modern Data Management

Enterprises often believe there is one path for data compression. They may think that data compression is done exclusively in software on the host CPU. Because the CPU does the processing, there is the risk of a performance penalty under load, making it a non-starter for critical performance workloads. In the same way, the data pipeline within your organization is unique and tailored to your requirements, and architecting how data flows offers plenty of options. Data compression can be done in many ways, and the outcomes of choosing how and where compression should be processed can lead to benefits that cascade throughout the architecture. ... How can you improve the overall cost of ownership of your infrastructure? How can you increase storage and performance while decreasing power consumption? How can you make the data center more sustainable? When organizations try to solve these sorts of problems, data compression may not immediately leap to mind as the answer. Data compression doesn’t get more attention because organizations simply aren’t thinking about it as a problem-solving tool. This becomes clear when you look at search trends related to data and see that “enterprise data compression” is orders of magnitude lower down the results than something like “data management.”


Want to be a data scientist? Do these 4 things, according to business leaders

"You have to try new tech continuously," he says. "Don't hesitate to use generative AI to help you complete your job. Now, you can write code by saying to a model, 'Okay, write me something that does this.' So, be open -- embrace the tech. I think that's important." Martin says that he's not your typical chief data officer (CDO). Rather than just focusing on leadership concerns, he still gets his hands dirty with code -- and he advises up-and-coming data talent to do the same. "It's important if you want to get ahead that you understand what you're doing and that you're playing with tech," he says. "It gives me an edge, especially in mathematics and data science. I know about statistics, and I can build models myself." ... "While we can talk about math expertise, which is important because you need some level of academic capability, I think more important than that, certainly when I'm recruiting, is that I'm looking for the rounded individual," he says. "The straight A-grade student is great, but that person might not always be the best fit, because they've got to manage their time, they need to interact with the business, and they need to go and talk with stakeholders from across the business."


The best part of working in data and AI is the constant change

AI and analytics is such a vast field today that it gives people the freedom to chart their own course. You can choose to deep dive into an area of data – such as data governance, data management, data privacy, or become a data scientist working with ML models. You can take on the more technical roles of data engineering, data architecture, or take a more holistic advisory role in consulting the client on their end-to-end data and AI strategy. You can choose to work for a consulting firm like Accenture and help solve problems for clients across industries or be part of an organisation’s internal data teams. The field of AI and analytics offers many career paths and is only going to grow as we head towards a future underpinned by data and AI. ... While technical skills underpin many roles in the space and should be developed consistently, logical reasoning, strategic thinking, industry knowledge etc, play an important part as well. My advice is to build a network of mentors and peers who can be your guides in your career journey. The support and wisdom of those who have walked this path before can be invaluable. But, equally, trust your unique perspective and voice. Your diversity of thought is a strength that will set you apart.


A quantum-safe cryptography DNSSEC testbed

In the context of the DNS, DNSSEC may no longer guarantee authentication and integrity when powerful quantum computers become available. For the end user, this means that they can no longer be sure that when they browse to example.nl they will end up at the correct website (spoofing). They may also receive more spam and phishing emails since modern email security protocols rely on DNSSEC as well. Fortunately, cryptographers are working on creating cryptographic algorithms resistant to quantum computer attacks — so-called quantum-safe cryptographic algorithms. However, those quantum-safe algorithms often have very different characteristics than their non-quantum-safe counterparts, such as signature sizes, computation time requirements, memory requirements and, in some cases, key management requirements. As a consequence, those quantum-safe algorithms are not drop-in replacements for today’s algorithms. For DNSSEC, it is already known that there are stringent requirements when it comes to, for example, signature sizes and validation speed. But other factors, such as the size of the zone file, also have implications for the suitability of algorithms.


Someone had to say it: Scientists propose AI apocalypse kill switches

In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication being, if implemented incorrectly, that such a kill switch could become a target for cybercriminals to exploit. Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole.


Cloud mastery is a journey

A secure foundation is required for developing an enterprise’s strong digital immunity. This entails various aspects like safeguarding against hackers, disaster recovery strategies, and designing robust systems. Enterprises employ the defense-in-depth approach for protection against hackers. It means that every element of an IT environment should be built robustly and securely. For this, a few practical strategies include employing AI-powered firewalls, System Information Event Management, strong identity authentication, antivirus tools, vulnerability management, and teams of ethical hackers for simulated attacks. The cloud can be a powerful asset for building backup systems and disaster recovery plans. These are critical to combat potential data center failures caused by an event like a storm, fire or electrical outage. Focusing on resilience is equally important and extends beyond robust software. Resiliency means addressing every possible failure and threat in securing and maintaining the availability of systems, data and networks. For example, failures in services like firewalls and content distribution networks might be rare but are plausible. 


It’s Time to End the Myth of Untouchable Mainframe Security.

It is critical for mainframe security to re-enter the cybersecurity conversation, and that starts with doing away with commonly held misconceptions. First is the mistaken belief that due to their mature or streamlined architecture with fewer vulnerabilities, mainframes are virtually impervious to hackers. There is the misconception that they exist in isolation within the enterprise IT framework, disconnected from the external world where genuine threats lurk. And then there’s the age factor. People newer to the profession have relatively little experience with mainframe systems when compared to their more experienced counterparts and will tend to not question their viewpoints or approaches of their leaders or senior team members. This state of affairs can’t continue. In the contemporary landscape, modern mainframes are routinely accessed by employees and are intricately linked to applications that encompass a wide array of functions, ranging from processing e-commerce transactions to facilitating personal banking services. The implications of a breach can’t be overstated. 



Quote for the day:

"When you do what you fear most, then you can do anything." -- Stephen Richards

Daily Tech Digest - February 17, 2024

Europe’s Digital Services Act applies in full from tomorrow - here’s what you need to know

In one early sign of potentially interesting times ahead, Ireland’s Coimisiún na Meán has recently been consulting on rules for video sharing platforms that could force them to switch off profiling-based content feeds by default in that local market. In that case the policy proposal was being made under EU audio visual rules, not the DSA, but given how many major platforms are located in Ireland the Coimisiún na Meán, as DSC, could spin up some interesting regulatory experiments if it take a similar approach when it comes to applying the DSA on the likes of Meta, TikTok, X and other tech giants. Another interesting question is how the DSA might be applied to fast-scaling generative AI tools. The viral rise of AI chatbots like OpenAI’s ChatGPT occurred after EU lawmakers had drafted and agreed the DSA. But the intent for the regulation was for it to be futureproofed and able to apply to new types of platforms and services as they arise. Asked about this, a Commission official said they have identified two different situations vis-à-vis generative AI tools: One where a VLOP is embedding this type of AI into an in-scope platform  — where they said the DSA does already apply. 


Composable Architectures vs. Microservices: Which Is Best?

Composable architecture is a modular approach to software design and development that builds flexible, reusable and adaptable software architecture. It entails breaking down extensive, monolithic platforms into small, specialized, reusable and independent components. This architectural pattern comprises a pluggable array of modular components, such as microservices, packaged business capability (PBC), headless architecture and API-first development that can be seamlessly replaced, assembled and configured to align with business requirements. In a composable application, each component is developed independently using the technologies best suited to the application’s functions and purpose. This enables businesses to build customized solutions that can swiftly adapt to business needs. ... The composable approach has gained significant popularity in e-commerce applications and web development for enhancing the digital experience for developers, customers and retailers, with industry leaders like Shopify and Amazon taking advantage of its benefits.


Nginx core developer quits project in security dispute, starts “freenginx” fork

Comments on Hacker News, including one by a purported employee of F5, suggest Dounin opposed the assigning of published CVEs to bugs in aspects of QUIC. While QUIC is not enabled in the most default Nginx setup, it is included in the application's "mainline" version, which, according to the Nginx documentation, contains "the latest features and bug fixes and is always up to date." ... MZMegaZone confirmed the relationship between security disclosures and Dounin's departure. "All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental," MZMegaZone wrote on Hacker News. He later added, "I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously." Dounin, reached by email, pointed to his mailing list responses for clarification. He added, "Essentially, F5 ignored both the project policy and joint developers' position, without any discussion." MegaZone wrote to Ars (noting that he only spoke for himself and not F5), stating, "It's an unfortunate situation, but I think we did the right thing for the users in assigning CVEs and following public disclosure practices. 


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organizations

Some problems with these anti-patterns. I'm going to talk again in threes, I've talked about three anti-patterns, one role across many teams, product versus engineering wars, and X-led. I'm going to talk about some of the problems with these. The first one is one group holds the power. One group holds all the decision-making power, and others can't properly contribute. They aren't given the opportunity to contribute. In our first example, Anita the designer doesn't hold any power because all she's doing is playing catch-up. She's got no time to really contribute to decisions. In the second anti-pattern in the product versus engineering, there's always a battle between who holds the power. It's not collaborative, there's silos between the two. ... Professional protectionism is about people protecting their professional boundaries and not letting other people step into them. It's like, "No, this is my area, you stay over there and you do your thing and I'll do my thing over here." Maybe some people have experienced this. For example, I was working with an organization recently and they said the user research team didn't want to publish how they did user research, because other people might do it.


Scalability Challenges in Microservices Architecture: A DevOps Perspective

Although microservices architectures naturally lend themselves to scalability, challenges remain as systems grow in size and complexity. Efficiently managing how services discover each other and distribute loads becomes complex as the number of microservices increases. Communication across complex systems also introduces a degree of latency, especially with increased traffic, and leads to an increased attack surface, raising security concerns. Microservices architectures also tend to be more expensive to implement than monolithic architectures. Creating secure, robust, and well-performing microservices architectures begins with design. Domain-driven design plays a vital role in developing services that are cohesive, loosely coupled, and aligned with business capabilities. Within a genuinely scalable architecture, every service can be deployed, scaled, and updated autonomously without affecting the others. One essential aspect of effectively managing microservices architecture involves adopting a decentralized governance model, in which each microservice has a dedicated team in charge of making decisions related to the service


CQRS Pattern in C# and Clean Architecture – A Simplified Beginner’s Guide

When implementing Clean Architecture in C#, it’s important to recognize the role each of the four components plays. Entities and Use Cases represent the application’s core business logic, Interface Adapters manage the communication between the Use Cases and Infrastructure components, and Infrastructure represents the outermost layer of the architecture. To implement Clean Architecture successfully, we have some best practices to keep in mind. For instance, Entities and Use Cases should be agnostic to the infrastructure and use plain C# classes, providing a decoupled architecture that avoids excess maintenance. Additionally, applying the SOLID principles ensures that the code is flexible and easily extensible. Lastly, implementing use cases asynchronously can help guarantee better scalability. Each component of Clean Architecture has a specific role to play in the implementation of the overall architecture. Entities represent the business objects, Use Cases implement the business logic, Interface Adapters handle interface translations, and Infrastructure manages the communication to the outside world. 


AI in practice - Celonis’ VP shares how AI can support system & process change

Brown said that, at the moment, Celonis is seeing AI being used to expedite often tedious work, or work that often is prone to human error. Looking back at the adoption of previous general purpose technologies, this makes sense. More often than not the tools that are adopted early on are applied to use cases that take time, don’t add a significant amount of value, and where mistakes are easily made by people. ... Brown also had some thoughts regarding how enterprises should consider their approach to AI adoption, with a focus on not isolating people away from the technology - keeping them close to the change and bringing them along on the journey. Firstly, Brown acknowledged that this is going to be challenging, given the tendency for employees to ‘build empires’ within enterprises and protect them at all costs. She said: I'll go back to a phrase I used for a long, long time and I still use: people don't hurt what they own. So if I'm invested in it, and it's part of what I care about, I'm going to protect it and grow it. If I boil down change management into one sentence, it’s about expectations and accountability. So, what can I expect to be different and what do I need to do differently?


Open Agile Architecture: A Comprehensive Guide for Enterprise Architecture Professionals

Open Agile Architecture equips you with a methodology that seamlessly integrates Agile principles into the realm of enterprise architecture. In today's business environment, change is constant. Open Agile Architecture allows you to respond swiftly and effectively to evolving business needs, technological advancements, and market dynamics. ... Collaboration is at the heart of Agile methodologies, and Open Agile Architecture extends this principle to enterprise architecture. By promoting cross-functional collaboration and open communication, the methodology breaks down silos within the organization. As a practitioner, you'll experience improved collaboration between business and IT teams, fostering a shared understanding of goals and priorities. ... Open Agile Architecture emphasizes an iterative and incremental approach to development. This means that instead of long, rigid planning cycles, you work on delivering incremental value in shorter iterations. This not only ensures continuous progress but also allows you to demonstrate tangible outcomes to stakeholders regularly.


Microsoft Copilot is preparing advances in data protection

As the company has revealed through the Bing blog, Copilot is being prepared to maximize the protection of user and company data that use this system. With this, Microsoft wants to make it clear that the company’s priority is to show that it has no interest in user data while using Copilot services in its 365 versions. It is evident that Copilot is becoming a fundamental piece of Microsoft’s brand strategy, and precisely for that reason they want to distance it from some of the main stigmas that AI currently has. For its part, Copilot is already deeply integrated into various Microsoft services, such as Bing or Teams, where it offers considerable support to the user. One of the concerns that many users have when using Artificial Intelligence is the mere fact of being part of the learning and training process by the AI. As these tools are constantly evolving, many of these systems used the users’ own usage to create variations and advancements in various subjects. However, over time, it has been shown that, in many cases, this has ended up “dumbing down” the AI. However, many users find it quite ironic that an AI, which is trained precisely by collecting data massively illicitly through the Internet, has to actively demonstrate a system that ensures that Copilot will not use user data to continue improving.


Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias

Culture plays a significant role in shaping our communication styles and worldviews. Just like cross-cultural human interactions can lead to miscommunications, users from diverse cultures that are interacting with conversational AI tools may feel misunderstood and experience them as less useful. To be better understood by AI tools, users may adapt their communication styles in a manner similar to how people learned to “Americanize” their foreign accents in order to operate personal assistants like Siri and Alexa. ... AI is already in use as the backbone of various applications that make decisions affecting people’s lives, such as resume filtering, rental applications and social benefits applications. For years, AI researchers have been warning that these models learn not only “good” statistical associations — such as considering experience as a desired property for a job candidate — but also “bad” statistical associations, such as considering women as less qualified for tech positions. As LLMs are increasingly used for automating such processes, one can imagine that the North American bias learned by these models can result in discrimination against people from diverse cultures.



Quote for the day:

''Failure will never overtake me if my de determination to succeed is strong enough.'' -- Og Mandino

Daily Tech Digest - February 16, 2024

GitHub: AI helps developers write safer code, but you need to get the basics right

With cybercriminals largely sticking to the same tactics, it is critical that security starts with the developer. "You can buy tools to prevent and detect vulnerabilities, but the first thing you need to do is help developers ensure they're building secure applications," Hanley said in a video interview with ZDNET. As major software tools, including those that power video-conferencing calls and autonomous cars, are built and their libraries made available on GitHub, if the accounts of people maintaining these applications are not properly secured, malicious hackers can take over these accounts and compromise a library. The damage can be wide-reaching and lead to another third-party breach, such as the likes of SolarWinds and Log4j, he noted. Hanley joined GitHub in 2021, taking on the newly created role of CSO as news of the colossal SolarWinds attack spread. "We still tell people to turn on 2FA...getting the basics is a priority," he said. He pointed to GitHub's efforts to mandate the use of 2FA for all users, which is a process that has been in the works during the last one and a half years and will be completed early this year. 


Why Tomago Aluminium reversed course on its cloud journey

“An ERP solution like ours is massive,” he says, highlighting that this can make it difficult to keep track of everything you are, and not, using. For instance, he says if you’re getting charged $20,000 for electricity, you might want to check your meter and verify that your usage and bill align. “If your electricity meter is locked away and you just get a piece of paper at the end of the month telling you everything’s fine and you owe $20 000, you’re probably going to ask some questions,” he says. Tomago was told everything was secure and running as it should, but they had no way to verify what they were being told was accurate. “We essentially had a swarm of big black boxes,” he says. “We put dollars in and got services out, but couldn’t say to the board, with confidence, that we were really in control of things like compliance, security, and due diligence.” Then in 2020, Tomago moved its ERP system back on-prem — a decision that’s paying dividends. “We now know what our position is from a cyber perspective because we know exactly what our growth rates are, and we know that our systems are up-to-date, and what our cost is because it’s the same every month,” he says.


OpenAI and Microsoft Terminate State-Backed Hacker Accounts

Threat actors linked to Iran and North Korea also used GPT-4, OpenAI said. Nation-state hackers primarily used the chatbot to query open-source information, such as satellite communication protocols, and to translate content into victims' local languages, find coding errors and run basic coding tasks. "The identified OpenAI accounts associated with these actors were terminated," OpenAI said. It conducted the operation in collaboration with Microsoft. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," the Redmond, Washington-based technology giant said. Microsoft's relationship with OpenAI is under scrutiny by multiple national antitrust authorities. A British government study published earlier this month concluded that large language models may boost the capabilities of novice hackers but so far are of little use to advanced threat actors. China-affiliated Charcoal Typhoon used ChatGPT to research companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. 


Why Most Founders and Investors Are Wrong About Disruption

Recognizing disruption requires an open mind. In many instances, people can't believe or see something is disruptive at first. They think the idea is foolish or won't work. Disruption is usually caused by something that hasn't existed before or something new. Airbnb is a great example here as well. Its founders are said to have gone to every venture capitalist in Silicon Valley and were famously laughed out of meetings. People couldn't see what they saw — it hadn't been invented yet. Even the most seasoned business leaders can misunderstand and mistake disruption or fail to recognize it. Disruption doesn't always mean extinction. History has proven this for countless companies, processes, products, services, and ideas. Organizations can collapse after big changes. They did not or could not adapt. But something new or different tends to fill in the gap. It's often better, and the cycle continues. I have been on both sides of disruption at my company, BriteCo. We are one of the jewelry industry's disruptors – we were the first to move jewelry consumers to 100% paperless processes with technology and the internet. We also provide our customers with different ways to buy our coverage, unique to BriteCo, versus an outdated analog process at the retail point of sale.


Will generative AI kill KYC authentication?

Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes way beyond quickly making false documents. He worries that thieves could use LLMs to create deep back stories for their frauds in case someone at a bank or government level reviews social media posts and websites to see if a person truly exists. “Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a ‘credible’ online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check,” Mallon says. “This isn’t a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.”


Generative AI: Shaping a New Future for Fraud Prevention

A new category called "AI Risk Decisioning" is poised to transform the landscape of fraud detection. It leverages the strengths of generative AI, combining them with traditional machine learning techniques to create a robust foundation for safeguarding online transactions. ... The first pillar involves creating a comprehensive knowledge fabric that serves as the foundation for the entire platform. This fabric integrates various internal data sources unique to the company, such as transaction records and real-time customer profiles. ... The third pillar of the AI Risk Decisioning approach focuses on automatic recommendations, offering powerful capabilities for real-time and effective risk management. It can automatically monitor transactions and identify trends or anomalies, suggest relevant features for risk models, conduct scenario analyses independently, and recommend the next best action to optimize performance. ... The fourth pillar of the AI Risk Decisioning approach emphasizes human-understandable reasoning. This pillar aims to make every decision, recommendation, or insight provided by the AI system easily understandable to human users.


Implementing a Digital Transformation Strategy

Actionable intelligence has been accepted as the “new normal” of the data-first enterprise. In the data-first enterprise, data and digital technologies not only open up innovative revenue channels but also create the most compliant (governed) business operations. However, in order for an enterprise to successfully plan, develop, and execute a data-first operating model, the business owners and operators have to first develop a digital transformation strategy – connecting the data piles, digital technologies, business processes, and marketing staff. The digital transformation strategy develops around the need to bridge the gaps between the current data-driven goals and processes and intended future business goals and processes. In a nutshell, the digital transformation strategy strikes a harmonious balance between traditional IT and marketing functions. Global businesses have witnessed firsthand the immense benefits of digital processes, such as improved efficiencies, reduced operating costs, and growth of additional revenue channels. A recent industry survey report indicated that 92% of businesses are already pursuing digital transformation in more than one way. However, the transformation across businesses is at various stages of maturity.


Planning a data lake? Prepare for these 7 challenges

Storing data in a central location simplifies compliance in the sense that you know where your data resides, though it also creates compliance challenges. If you store many different types of data in your lake, different assets may be subject to different compliance standards. Data that contains personally identical information (PII), for instance, must be managed differently in some ways than other types of data to comply with laws like DPA, GDPR or HIPAA. While a data lake won’t prevent you from applying granular security controls to different data assets, it doesn't make it easier, either – and it can make it more difficult if your security and compliance tools are not capable of applying different policies to different data assets within a centralized repository. ... Placing your data into a central location to create a data lake is one thing but connecting it to various applications and the workforce who needs access is another. Until you develop the necessary data integrations – and unless you keep them up to date – your data lake will deliver little value. Building data integrations takes time, effort, and expertise and users sometimes underestimate how difficult it is to create successful data integrations. Be sure and prioritize data integration strategy as part of your overall process.


Does Cloud Native Change Developer Productivity and Experience?

When management focuses too much on developer productivity, developer experience can suffer and thus hurt morale and, paradoxically, productivity as well. It’s important for management to have a light touch to avoid this problem, especially with cloud native. Cloud native environments can become so dynamic and noisy that both productivity and developer experience can decline. Management must take special care to support its developers with the right platforms, tools, processes and productivity metrics to facilitate the best outcomes, leveraging platform engineering to create and manage IDPs that facilitate cloud native development despite its inherent complexity. After all, the complexity of cloud native development alone isn’t the problem. Complexity presents challenges to be sure, but developers are always up for a challenge. Complexity coupled with a lack of visibility brings frustration, lowering productivity and DX. With the right observability, for example, with Chronosphere and Google Cloud, developers have a good shot at untangling cloud native’s inherent complexity, delivering quality software on time and on budget, while maintaining both productivity and DX.


Vulnerability to Resilience: Vision for Cloud Security

In the recent era of cloud-native development and DevSecOps, CISOs face the challenge of fostering a security-conscious culture that spans across various cross-functional teams. However, by adopting deliberate, disruptive, engaging, and enjoyable approaches that also provide a return on investment, a sustainable security culture can be achieved. It is essential to instill the concept of shared responsibility for security and focus on enhancing awareness and adhering to advanced security practices. If you don't already have a secure development lifecycle, it is imperative to integrate one immediately. Recognizing and rewarding individuals who prioritize security is one of the ways to encourage a security-focused culture. Additionally, creating a security community and making security more engaging and enjoyable can also help cultivate a sustainable security culture. CISOs should leverage technical tools and best practices to facilitate the seamless integration of security into the Continuous Integration/Continuous Deployment (CI/CD) pipeline. This can be achieved through various measures, such as conducting threat modeling, adopting a shift-left security approach, incorporating IDE security ...



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - February 15, 2024

CISO and CIO Convergence: Ready or Not, Here It Comes

While CIOs are still responsible for setting and meeting technology goals and for staying on budget, their primary mandate is determining how the company can harness technology to innovate, and then procure and manage those resources. While plenty of companies still maintain large, on-premise IT estate, it's just a matter of time before they digitally transform. Either way, the CIO role has become markedly less operational over time. On the other hand, the profile of CISOs has been growing since the early 2000s, set against a non-stop carousel of compliance mandates, data breaches, and emerging cybersecurity threats. While data breaches may have forced businesses to pay attention to security, it was compliance mandates that funded it. From HIPAA and PCI DSS to GDPR, SOC 2, and more, compliance has been a double-edged sword for CISOs. Compliance increased the role of cybersecurity teams and made them more visible across IT and the business as a whole, providing CISOs with bigger budgets and increased latitude on how to spend it. However, all the effort they put into compliance did little to stymie phishing, ransomware, big breaches, and/or malicious insiders. 


Will Generative AI Kill DevSecOps?

Beyond having automation and guardrails in place, you also need security policies at the company level, Moisset said, to make sure that DevSecOps understands all the generative AI tools colleagues are using. Then you can educate them on how to use it, like creating and communicating a generative AI policy. Because a total ban on GenAI just won’t fly. When Italy temporarily banned ChatGPT, Foxwell said there was a visible decrease in productivity across the country’s GitHub organizations, but, when it was reinstated, “what also picked up was the usage of tools that circumvented all of the government policies and firewalls around the prevention of using these” tools. Engineers always find a way. Particularly when using generative AI for customer service chatbots, Moisset said, you need guardrails in place around both the inputs and outputs, as malicious actors can potentially “socialize” the chatbot via prompt injection to give a desired result — like when someone was able to buy a Chevy for $1 from a chatbot. “It’s back to educating the users and developers that it’s good to use AI, we should be using AI, but we need to actually put guardrails around it,” she said, which also demands an understanding of how your customers interact with GenAI.


Combining heat and compute

Data centers offer a predictable supply of heat because they keep their servers running continuously. But the heat is “low-grade:” It is warm rather than hot, and it comes in the form of air, which is difficult to transport. So, most data centers vent their heat to the atmosphere. Sometimes, there are district heat networks, which provide warmth to local homes and businesses through a piped network. If your data center is near one of these, it is a matter of extending it to connect to the data center, and boosting the grade of heat. But you have to be in the right place to connect to one. “There are certain countries that have established or developing heat networks, but the majority don't have a heat network per se, so it's going on a piecemeal basis,” Neal Kalita, senior director of power and energy at NTT, tells DCD. You are unlikely to find one in the US, says Rolf Brink of cooling consultancy Promersion: “The United States is a fundamentally different ecosystem. But Europe is a lot more dense in terms of population, and there is more heat demand.” The Nordic countries have a lot of heat networks. Stockholm Data Parks is a well-known example - a data center campus in urban Stockholm, where every data center has a connection to the district heating network and gets paid for its heat.


Harmonizing human potential and AI: The evolution of work in the digital era

The evolving landscape of work is witnessing a profound transformation as the fusion of human potential with AI takes center stage. Concerns about the ethical implications of AI are well-known, including the potential for perpetuating bias and discrimination and its impact on employment and job security. Ensuring that AI is developed and deployed ethically and responsibly is crucial, taking into account fairness, transparency and accountability. ... Optimizing human-centric capabilities with automation and an AI-first mindset is significant for long-term success. Consider a telecoms operator with employees struggling to grapple with the labor-intensive process of manually reviewing a high volume of mobile tower lease contracts. By embracing an AI-powered platform equipped with capabilities for faster and more accurate extraction of contract clauses, employees were able to shift their focus toward leveraging hidden risks identified by the platform. This enabled the renegotiation of existing contracts, leading to millions of dollars in savings. It’s no coincidence that the enterprises that are more inclined to augment human potential are those resilient enough to maximize the value of AI-led transformations. 


5 Wi-Fi vulnerabilities you need to know about

Like wired networks, Wi-Fi is susceptible to Denial of Service (DoS) attacks, which can overwhelm a Wi-Fi network with excessive amount of traffic. This can cause the Wi-Fi to become slow or unavailable, disrupting normal operations of the network, or even the business. A DoS attack can be launched by generating a large number of connection or authentication requests, or injecting the network with other bogus data to break the Wi-Fi. ... Wi-jacking occurs when a Wi-Fi-connected device has been accessed or taken over by an attacker. The attacker could retrieve saved Wi-Fi passwords or network authentication credentials on the computer or device. Then they could also install malware, spyware, or other software on the device. They could also manipulate the device’s settings, including the Wi-Fi configuration, to make the device connect to rogue APs. ... RF interference can cause Wi-Fi disruptions. Instead of being caused by bad actors, RF interference could be triggered by poor network design, building changes, or other electronics emitting or leaking into the RF space. Interference can result in degraded performance, reduced throughput, and increased latency.


AI outsourcing: A strategic guide to managing third-party risks

Bias may persist in many face detection systems. Naturally, this misidentification could have severe consequences for the parties involved. Diverse training data and transparent algorithms are necessary to mitigate the risk of discriminatory outcomes. Furthermore, complex AI models often encounter the “black box” problem or how some AI models arrive at their decisions. Teaming with a third-party AI service requires human oversight to navigate the threat of biased algorithms. ... Most of us can admit that the risk of becoming overly reliant on AI is significant. AI can quickly become a go-to solution for many challenges. It’s no surprise that companies face a similar risk, becoming too dependent on a single vendor’s AI solutions. However, this approach can become problematic. Companies can “get stuck,” and switching providers seems almost impossible. ... Quality and reliability concerns are top-of-mind for most company leaders partnering with third-party AI services. Some primary concerns include service outages, performance issues, and unexpected disruptions. Operational resilience is necessary, and contingency plans are a significant piece of the resiliency puzzle, given the damage business downtime can cause. 


Practices for Implementing an Effective Data Governance Strategy

Ensuring the integrity and usability of data within an organization requires implementing clear data quality standards and metrics. These standards serve as a benchmark for data quality, guiding data management practices and ensuring that data is accurate, complete, and reliable. Organizations can streamline their data governance processes by defining what constitutes quality data, making it easier to identify and rectify data issues. This approach enhances data quality, supports compliance with regulatory requirements, and improves decision-making capabilities. Developing a comprehensive set of data quality metrics is crucial for monitoring and maintaining high data standards. These metrics should be aligned with the organization’s strategic objectives and include criteria such as accuracy, completeness, consistency, timeliness, and uniqueness. ... Creating an environment where data stewardship and accountability are at the forefront requires strategic planning and commitment from all levels of an organization. It is essential to embed data governance principles into the corporate culture, ensuring that every team member understands their role in maintaining data integrity and security.


What is the impact of AI on storage and compliance?

Right now, when you look at traditional storage, generally speaking you look at your environment, your ecosystem, your data, classifying that data, and putting a value on it. And, depending on that value and the potential impact, you put in the right security and assign the length of time you need to keep the data and how you keep it, delete it. But, if you look at a CRM [customer relationship management service], if you put the wrong data in then the wrong data comes out, and it’s one set of data. So, to be blunt, garbage in, garbage out. With AI, it’s much more complex than that, so you may have garbage in, but instead of one dataset out that might be garbage, there might be a lot of different datasets and they may or may not be accurate. If you look at ChatGPT, it’s a little bit like a narcissist. It’s never wrong and if you give it some information and then it spits out the wrong information and then you say, “No, that’s not accurate”, it will tell you that’s because you didn’t give it the right dataset. And then at some stage it will stop talking to you, because it will have used up all its capability to argue with you, so to speak. From a compliance perspective, if you are using AI – a complicated AI or a simple AI like ChatGPT – to create a marketing document, that’s OK.


How to Get Your Failing Data Governance Initiatives Back on Track

Data governance is a big lift. Organizations might make the mistake of attempting to roll the initiative out across the entire enterprise without building in the steps to get there. “If you make it too broad and end up not focusing on short-term goals that you can demonstrate to keep the funding going, these engagements [tend] to fail,” says Prasad. Organizational issues are some of the major stumbling blocks standing in the way of successful data governance, but there can also be technical obstacles. Reiter points to the importance of leveraging automation. If an enterprise team attempts to manually undertake data governance mapping, it could be irrelevant by the time it is completed. ... Documentation, or lack thereof, can be a good indicator of a data governance initiatives' progress and sustainability. “As things are changing over time and documentation isn’t updated, that's a great sign that governance is not maintainable,” Holiat says. Getting feedback from end users can alert data governance leaders to issues standing in the way of adoption. Are people throughout the organization frustrated with the data governance program? Does it facilitate their access to data, or is it making their jobs more difficult?


Adopting AI with Eyes Wide Open

For businesses in general, AI can increase efficiency, make the workplace safer, improve customer service, create competitive advantage and lead to new business models and revenue streams. But like any technological innovation, AI has its risks and challenges. At the heart of AI is code and data; code can (and often does) contain bugs, and data can (and often does) contain anomalies. But that is no different to the technological innovations that we have embraced to-date. Arguably, the risks and challenges of AI are greater – not least of all because of the potential breadth of its application – and they include (but are certainly not limited to): overreliance, lack of transparency, ethical concerns, security, and regulatory and statutory challenges which typically lag behind the pace of progress. So, what does have this to do with strategy and architecture, and in particular digital transformation? Too often in organizations, new technologies are rushed in, in the belief that there is no time to lose. Before you know it, the funds and resources have been found to embark on an initiative (programme or project) to adopt it, spearheading the way to the future. It is the future! 



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - February 14, 2024

How AI is strengthening XDR to consolidate tech stacks

XDR platforms need AI/ML technologies to identify malware-free breach attempts while also looking for signals of attackers relying on legitimate system tools and living-off-the-land (LOTL) techniques to breach endpoints undetected. ... VentureBeat spoke with several CEOs at RSAC 2023 to learn how each perceives the value of AI in their product strategies today and in the future. Connie Stack, CEO of NextDLP, told VentureBeat, “AI and machine learning can significantly enhance data loss prevention by adding intelligence and automation to detecting and preventing data loss. AI and machine learning algorithms can analyze patterns in data and detect anomalies that may indicate a security breach or unauthorized access to sensitive information well before any policy violation occurs.” XDR providers tell VentureBeat that the challenge of parsing an exponential increase in telemetry data, performing telemetry enrichment and mapping data to schema are the immediate architectural requirements they have. There’s also the need for real-time cross-collaboration, analytics and alert prioritization. XDR’s current and future ecosystem is dependent on AI’s continued growth.


10 ways generative AI will transform software development

The ability to prompt for code adds risks if the code generated has security issues, defects, or introduces performance issues. The hope is that if coding is easier and faster, developers will have more time, responsibility, and better tools for validating the code before it gets embedded in applications. But will that happen? “As developers adopt AI for productivity benefits, there’s a required responsibility to gut-check what it produces,” says Peter McKee, head of developer relations at Sonar. “Clean as you code ensures that by performing checks and continuous monitoring during the delivery process, developers can spend more time on new tasks rather than remediating bugs in human-created or AI-generated code.” CIOs and CISOs will expect developers to perform more code validation, especially if AI-generated code introduces significant vulnerabilities. ... Another implication of code developed with genAI concerns how enterprise leaders develop policies and monitor the supply chain of what code is embedded in enterprise applications. Until now, organizations were most concerned about tracking open source and commercial software components, but genAI adds new dimensions.


Agile Methodologies In The Era Of Machine Learning Development

Both emphasize adaptability and continuous improvement, providing a solid foundation for building robust ML models. The iterative cycles of Agile resonate with the constant refinement required in ML algorithms, fostering an environment conducive to experimentation and learning. Bringing together Agile and Machine Learning (ML) is like mixing the best of teamwork and smart strategies for computer programs. Agile is like a way of working that’s flexible and can adapt quickly, and ML is all about smart machines learning from data. When they come together, it’s like using a super-smart and flexible approach to make really cool and smart computer programs ... Everyone has a special skill, like some friends are good at building, and others are good at deciding what the robot dog should do.This teamwork also helps if you discover something new, like a better way for the robot dog to move. Agile allows you to quickly change and improve, just like trying a new game. ... Unlike traditional software, ML projects grapple with inherent uncertainties in data and model outcomes, requiring a more adaptive approach. Navigating these uncertainties is paramount when incorporating Agile principles.


The AI data-poisoning cat-and-mouse game — this time, IT will win

The offensive technique works in one of two ways. One, it tries to target a specific company by making educated guesses about the kind of sites and material they would want to train their LLMs with. The attackers then target, not that specific company, but the many places where it is likely to go for training. If the target is, let’s say Nike or Adidas, the attackers might try and poison the databases at various university sports departments with high-profile sports teams. If the target were Citi or Chase, the bad guys might target databases at key Federal Reserve sites. The problem is that both ends of that attack plan could easily be thwarted. The university sites might detect and block the manipulation efforts. To make the attack work, the inserted data would likely have to include malware executables, which are relatively easy to detect. Even if the bad actors’ goal was to simply feed incorrect data into the target systems — which would, in theory, make their analysis flawed — most LLM training absorbs such a massively large number of datasets that the attack is unlikely to work well.


What Is API Sprawl and Why Is It Important?

Inconsistencies between APIs can stunt the developer experience around integration. For example, many different design paradigms are used in modern API development, including SOAP, REST, gRPC and more asynchronous formats like webhooks or Kafka streams. An organization might adopt various styles simultaneously. Using various API styles provides best-of-breed options for the task at hand. That said, style inconsistencies can make it challenging for a single developer to navigate disparate components without guidance. ... As cybersecurity experts often say, you can’t secure what you don’t know. Amid technology sprawl, you likely won’t be aware of the hundreds, if not thousands, of APIs being developed and consumed daily. Without inventory management, APIs can slip under the rug and rot. API sprawl can also lead to insecure coding practices. Security researchers at Escape recently found 18,000 high-risk API-related secrets and tokens after performing a scan of the web. ... Life cycle management can also suffer with sprawl. If API versioning and retirement schedules aren’t communicated effectively, it can easily lead to breaking changes on the client side. 


Rise in cyberwarfare tactics fueled by geopolitical tensions

There are a number of ways in which public-private partnerships can be effective in addressing cybersecurity threats. First, governments and private companies can share information about cyber threats and vulnerabilities. This can help to improve the overall security posture of both the public and private sectors. Second, governments and private companies can develop joint cybersecurity initiatives. These initiatives can focus on a variety of areas, such as developing new security technologies, improving incident response capabilities, or providing cybersecurity training to employees. Third, governments and private companies can collaborate on research and development efforts. This can help to identify new cybersecurity threats and develop new ways to protect against them. Caveat, when talking about public-private partnerships – what is needed is real operational and ongoing public-private collaboration is essential for sharing information, developing best practices, and mitigating risks and is essential for building a more secure and resilient cyber ecosystem. 


New media could bring fresh competition to tape archive market

Glass is becoming another alternative to tape. Microsoft's Project Silica uses femtosecond lasers to write data to quartz glass and "polarization-sensitive microscopy using regular light to read," according to Microsoft. Another company, Cerabyte, uses lasers to etch patterns into ceramic nanocoatings on glass. Ceramic is resistant to heat, moisture, corrosion, UV light, radiation and electromagnetic pulse blasts. Ceramic also has another advantage over tape: Its high durability leads to fewer refresh cycles, according to Martin Kunze, chief marketing officer and co-founder of Cerabyte, a startup headquartered in Munich. "Tape has limited durability and needs to be either refreshed or all migrated onto new formats," Kunze said. This undertaking is expensive and time-consuming, he said. Kunze added that tape is vulnerable to vertical market failure. Western Digital is the only company manufacturing the reading and writing heads for tape. "Assume there is a decision on the board: 'We don't [want to] run this company anymore because it doesn't bring in as much revenue,'" he said. The single point of failure could leave enterprises in the lurch. He sees another problem with tape -- it's stodgy.


Apache Pekko: Simplifying Concurrent Development With the Actor Model

In the actor model, actors communicate by sending messages to each other, without transferring the thread of execution. This non-blocking communication enables actors to accomplish more in the same amount of time compared to traditional method calls. Actors behave similarly to objects in that they react to messages and return execution when they finish processing the current message. Upon reception of a message an Actor can do the following three fundamental actions: send a finite number of messages to Actors it knows; create a finite number of new Actors; and designate the behavior to be applied to the next message. ... Pekko is designed as a modular application and encompasses different modules to provide extensibility. The main components are: Pekko Persistence enables actors to persist events for recovery on failure or during migration within a cluster that provides abstractions for developing event-sourced applications; Pekko Streams module provides a solution for stream processing, incorporating back-pressure handling seamlessly and ensuring interoperability with other Reactive Streams implementations...


How Can Synthetic Data Impact Data Privacy in the New World of AI

Data from the real world is often inherently biased. This is because the data used to train models is largely gathered from across the internet, reflecting biases present in society and the socio-economic groups prevalent in the social media spaces used to gather this data. Data scientists have turned to synthetic data and ‘Digital Humans’ to combat these biases. With Digital Humans, data scientists can vary elements of ‘Digital DNA,’ such as et,’ city, size, and clothing, and mix with real-world data to create more representative and diverse datasets. Of course, this also protects image rights and PII exposure that could come from using images and footage of people in the real world. Mindtech worked with a construction company that wanted to develop autonomous site vehicles. The company wanted to enhance these vehicles’ safety and accrue a broader range of data to train them. As a result, it used synthetic data to create diverse synthetic datasets to train these vehicles to identify various people on site, no matter size/shape/sex/ethnicity/clothing/ – the vehicles could stop their journey if someone were blocking their way.


The Great Superapp Dilemma: Business Ambitions vs User Privacy

If we put privacy aside for a moment, the benefits of a possible superapp cannot be denied. We could say goodbye to the hundreds of online accounts that operate as an isolated silo managed by unrelated services and domains and the chore of updating account details across them all, one by one. And, as well as promising a much simpler user experience through a single application, it would unlock new convenient services using a broader set of data, and allow for increased innovation that adds value for users – such as unified health metrics, consolidated banking services, cohesive government-related accounts, integrated social networks, or unified marketplaces. However, managing vast volumes of accessible data – which has grown excessively since the era of big data, and will no doubt continue with the advent of AI – is operationally challenging to say the least. ... With these concerns in mind, companies working on superapp development must address issues including managing and recovering from identity theft, securing data against breaches, and ensuring that data access aligns with the user’s consented sharing policy.



Quote for the day:

''Effective questioning brings insight, which fuels curiosity, which cultivates wisdom.'' -- Chip Bell