Daily Tech Digest - July 18, 2024

The Critical Role of Data Cleaning

Data cleaning is a crucial step that eliminates irrelevant data, identifies outliers and duplicates, and fixes missing values. It involves removing errors, inconsistencies, and, sometimes, even biases from raw data to make it usable. While buying pre-cleaned data can save resources, understanding the importance of data cleaning is still essential. Inaccuracies can significantly impact results. In many cases, until low-value data is removed, the rest remains hardly usable. Cleaning works as a filter, ensuring that the data that passes through to the next step is more refined and relevant to your goals. ... At its core, data cleaning is the backbone of robust and reliable AI applications. It helps guard against inaccurate and biased data, ensuring AI models and their findings are on point. Data scientists depend on data cleaning techniques to transform raw data into a high-quality, trustworthy asset. ... Interestingly, LLMs that have been properly trained on clean data can play a significant role in the data cleaning process itself. Their advanced capabilities enable LLMs to automate and enhance various data cleaning tasks, making the process more efficient and effective.
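The steps described above are easiest to see in code. The following is a minimal, hedged sketch of a cleaning pass in pandas; the file name and the customer_id column are illustrative assumptions, not details from the article.

```python
# Minimal data-cleaning sketch: deduplicate, fill missing values, and flag simple
# outliers before handing the data to the next step. File and column names are assumed.
import pandas as pd

df = pd.read_csv("customers.csv")          # hypothetical raw input

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Fill missing numeric values with the column median; drop rows missing an ID.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())
df = df.dropna(subset=["customer_id"])     # "customer_id" is an assumed key column

# Flag (rather than silently delete) outliers with a simple IQR rule.
for col in numeric_cols:
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    df[f"{col}_is_outlier"] = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
```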


What Is Paravirtualization?

Paravirtualization builds upon traditional virtualization by offering extra services, improved capabilities or better performance to guest operating systems. With traditional virtualization, organizations abstract the underlying resources and present them to the guest via virtual machines so the guest can run as is, says Greg Schulz, founder of the StorageIO Group, an IT industry analyst consultancy. However, those virtual machines use all of the resources assigned to them, meaning there is a great deal of idle time, even though it doesn’t appear so, according to Kalvar. Paravirtualization uses software instructions to dynamically size and resize those resources, Kalvar says, turning VMs into bundles of resources. They are managed by the hypervisor, a software component that manages multiple virtual machines on a computer. ... One of the biggest advantages of paravirtualization is that it is typically more efficient than full virtualization because the hypervisor can closely manage and optimize resources between different operating systems. Users can manage the resources they consume on a granular basis. “I’m not buying an hour of a server, I’m buying seconds of resource time,” Kalvar says.


Leaked Access Keys: The Silent Revolution in Cloud Security

The challenge for service accounts is that MFA does not work, and network-level protection (IP filtering, VPN tunneling, etc.) is not consistently applied, primarily due to complexity and costs. Thus, service account key leaks often enable hackers to access company resources. While phishing is unusual in the context of service accounts, leakages are frequently the result of developers posting them (unintentionally) online, often in combination with code fragments that reveal the user to whom they apply. ... Now, Google has changed the game with its recent policy change. If an access key appears in a public GitHub repository, GCP deactivates the key, regardless of whether applications crash as a result. Google's announcement marks a shift in the risk and priority tango. Gone are the days when patching vulnerabilities could take days or weeks. Welcome to the fast-paced cloud era. Zero-second attacks after credential leakages demand zero-second fixing. Preventing an external attack becomes more important than avoiding crashing customer applications – that, at least, is Google's opinion.
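Because most of these leaks start with credentials pasted into source files, even a simple pre-commit scan catches many of them. The sketch below is illustrative only; the regular expressions are simplified examples, not a complete ruleset.

```python
# Illustrative pre-commit scan for credential-like strings in source files.
import pathlib
import re
import sys

PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),            # simplified pattern
    "private_key_block": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan(path: pathlib.Path) -> list[str]:
    text = path.read_text(errors="ignore")
    return [f"{path}: possible {name}" for name, rx in PATTERNS.items() if rx.search(text)]

if __name__ == "__main__":
    hits = [finding for p in map(pathlib.Path, sys.argv[1:]) for finding in scan(p)]
    print("\n".join(hits) or "no obvious secrets found")
    sys.exit(1 if hits else 0)   # non-zero exit blocks the commit in a pre-commit hook
```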


Juniper advances AI networking software with congestion control, load balancing

On the load balancing front, Juniper has added support for dynamic load balancing (DLB) that selects the optimal network path and delivers lower latency, better network utilization, and faster job completion times. From the AI workload perspective, this results in better AI workload performance and higher utilization of expensive GPUs, according to Sanyal. “Compared to traditional static load balancing, DLB significantly enhances fabric bandwidth utilization. But one of DLB’s limitations is that it only tracks the quality of local links instead of understanding the whole path quality from ingress to egress node,” Sanyal wrote. “Let’s say we have CLOS topology and server 1 and server 2 are both trying to send data called flow-1 and flow-2, respectively. In the case of DLB, leaf-1 only knows the local links utilization and makes decisions based solely on the local switch quality table where local links may be in perfect state. But if you use GLB, you can understand the whole path quality where congestion issues are present within the spine-leaf level.”


Impact of AI Platforms on Enhancing Cloud Services and Customer Experience

AI platforms enable businesses to streamline operations and reduce costs by automating routine tasks and optimizing resource allocation. Predictive analytics, powered by AI, allows for proactive maintenance and issue resolution, minimizing downtime and ensuring continuous service availability. This is particularly beneficial for industries where uninterrupted access to cloud services is critical, such as finance, healthcare, and e-commerce. ... AI platforms are not only enhancing backend operations but are also revolutionizing customer interactions. AI-driven customer service tools, such as chatbots and virtual assistants, provide instant support, personalized recommendations, and seamless user experiences. These tools can handle a wide range of customer queries, from basic information requests to complex problem-solving, thereby improving customer satisfaction and loyalty. The efficiency and round-the-clock availability of AI-driven tools make them invaluable for businesses. By the year 2025, it is expected that AI will facilitate around 95% of customer interactions, demonstrating its growing influence and effectiveness.


2 Essential Strategies for CDOs to Balance Visible and Invisible Data Work Under Pressure

Short-termism under pressure is a common mistake, resulting in an unbalanced strategy. How can we, as data leaders, successfully navigate such a scenario? “Working under pressure and with limited trust from senior management can force first-time CDOs to commit to an unbalanced strategy, focusing on short-term, highly visible projects – and ignore the essential foundation.” ... The desire to invest in enabling topics stems from the balance between driving and constraining forces. Senior management tends to ignore enabling topics because they rarely contribute directly to the bottom line; they can be a black box to a non-technical person and require multiple teams to collaborate effectively. On the other hand, Anne knew that the same people eagerly anticipated the impact of advanced analytics such as GenAI and were worried about potential regulatory risks. With the knowledge of the key enabling work packages and the motivating forces at play, Anne had everything she needed to argue for and execute a balanced long-term data strategy that does not ignore the “invisible” work required.


Gen AI Spending Slows as Businesses Exercise Caution

Generative AI has advanced rapidly over the past year, and organizations are recognizing its potential across business functions. But businesses have now taken a cautious stance regarding gen AI adoption due to steep implementation costs and concerns related to hallucinations. ... This trend reflects a broader shift away from the AI hype, and while businesses acknowledge the potential of this technology, they are also wary of the associated risks and costs, according to Michael Sinoway, CEO, Lucidworks. "The flattened spending suggests a move toward more thoughtful planning. This approach ensures AI adoption delivers real value, balancing competitiveness with cost management and risk mitigation," he said. ... Concerns regarding implementation costs, accuracy and data security have increased considerably in 2024. The number of business leaders with concerns related to implementation costs has increased 14-fold, and concerns related to response accuracy have grown fivefold. While concerns about data security have increased only threefold, data security remains the biggest worry.


CIOs are stretched more than ever before — and that’s a good thing

“Many CIOs have built years of credibility and trust by blocking and tackling the traditional responsibilities of the role,” she adds. “They’re now being brought to the conversation as business leaders to help the organization think through transformational priorities because they’re functional experts like any other executive in the C-suite.” ... “Boards want technology to improve the top and bottom line, which can be a tough balance, even if it’s one that CIOs are getting used to managing,” says Nash Squared’s White. “On the one hand, they’re being asked to promote innovation and help generate revenue, and on the other, they’re often charged with governance and security, too.” The importance of technology will only continue to increase going forward as well. Gen AI, for example, will make it possible to boost productivity while reducing costs. CyberArk’s Grossman expects the central role of digital leaders in exploiting these emerging technologies will mean high-level CIOs will be even more important in the future.


What Is a Sovereign Cloud and Who Truly Benefits From It?

A sovereign cloud is a cloud computing environment designed to help organizations comply with regulatory rules established by a particular government. This often entails ensuring that data stored within the cloud environment remains within a specific country. But it can also involve other practices, as we explain below. ... One challenge is cost. In general, cloud computing services on a sovereign cloud cost more than their equivalents on a generic public cloud. The exact pricing can vary widely depending on a number of factors, such as which cloud regions you select and which types of services you use, but in general, expect to pay a premium of at least 15% to use a sovereign cloud. A second challenge of using sovereign clouds is that in some cases your organization must undergo a vetting process to use them, because some sovereign cloud providers only make their solutions available to certain types of organizations — often, government agencies or contractors that do business with them. This means you can't just create a sovereign cloud account and start launching workloads in a matter of minutes, as you could in a generic public cloud.


Securing datacenters may soon need sniffer dogs

So says Len Noe, tech evangelist at identity management vendor CyberArk. Noe told The Register he has ten implants – passive devices that are observable with a full body X-ray, but invisible to most security scanners. Noe explained he's acquired swipe cards used to access controlled premises, cloned them in his implants, and successfully entered buildings by just waving his hands over card readers. ... Noe thinks hounds are therefore currently the only reliable means of finding humans with implants that could be used to clone ID cards. He thinks dogs should be considered because attackers who access datacenters using implants would probably walk away scot-free. Noe told The Register that datacenter staff would probably notice an implant-packing attacker before they access sensitive areas, but would then struggle to find grounds for prosecution because implants aren't easily detectable – and even if they were, the information they contain is considered medical data and is therefore subject to privacy laws in many jurisdictions.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - July 17, 2024

Optimization Techniques For Edge AI

Edge devices often have limited computational power, memory, and storage compared to centralised servers. Because of this, cloud-centric ML models need to be retargeted so that they fit within the available resource budget. Further, many edge devices run on batteries, making energy efficiency a critical consideration. The hardware diversity in edge devices, ranging from microcontrollers to powerful edge servers, each with different capabilities and architectures, requires different model refinement and retargeting strategies. ... Many use cases involve the distributed deployment of numerous IoT or edge devices, such as CCTV cameras, working collaboratively towards specific objectives. These applications often have built-in redundancy, making them tolerant to failures, malfunctions, or less accurate inference results from a subset of edge devices. Algorithms can be employed to recover from missing, incorrect, or less accurate inputs by utilising the global information available. This approach allows for the combination of high and low accuracy models to optimise resource costs while maintaining the required global accuracy through the available redundancy.
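One common way to combine high and low accuracy models on constrained hardware is confidence-based fallback. The sketch below illustrates the idea only; the model objects, their predict() methods, and the threshold are hypothetical placeholders.

```python
# Confidence-based fallback between a small on-device model and a larger model.
# small_model and large_model are hypothetical objects exposing predict(frame) -> probabilities.
import numpy as np

CONFIDENCE_THRESHOLD = 0.8   # assumed tuning parameter

def classify(frame, small_model, large_model) -> int:
    probs = small_model.predict(frame)      # cheap, quantized model runs first
    if np.max(probs) >= CONFIDENCE_THRESHOLD:
        return int(np.argmax(probs))        # confident enough: stay on the small model
    probs = large_model.predict(frame)      # uncertain: escalate to the more accurate model
    return int(np.argmax(probs))
```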


The Cyber Resilience Act: A New Era for Mobile App Developers

Collaboration is key for mobile app developers to prepare for the CRA. They should first conduct a thorough security audit of their apps, identifying and addressing any vulnerabilities. Then, they’ll want to implement a structured plan to integrate the needed security features, based on the CRA’s checklist. It may also make sense to invest in a partnership with cybersecurity experts who can provide deeper insights and help streamline the process. Developers cannot be expected to become top-notch security experts overnight. Working with cybersecurity firms, legal advisors and compliance experts can clarify the CRA, simplify the path to compliance and provide critical insights into best practices, regulatory jargon and tech solutions, ensuring that apps meet CRA standards while maintaining innovation. It’s also important to note that keeping comprehensive records of compliance efforts is essential under the CRA. Developers should establish a clear process for documenting security measures, vulnerabilities addressed, and any breaches or other incidents that were identified and remediated.


Sometimes the cybersecurity tech industry is its own worst enemy

One of the fundamental infosec problems facing most organizations is that strong cybersecurity depends on an army of disconnected tools and technologies. That’s nothing new — we’ve been talking about this for years. But it’s still omnipresent. ... To a large enterprise, “platform” is a code word for vendor lock-in, something organizations tend to avoid. Okay, but let’s say an organization was platform curious. It could also take many months or years for a large organization to migrate from distributed tools to a central platform. Given this, platform vendors need to convince a lot of different people that the effort will be worth it — a tall task with skeptical cybersecurity professionals. ... Fear not, for the security technology industry has another arrow in its quiver — application programming interfaces (APIs). Disparate technologies can interoperate by connecting via their APIs, thus cybersecurity harmony reigns supreme, right? Wrong! In theory, API connectivity sounds good, but it is extremely limited in practice. For it to work well, vendors have to open their APIs to other vendors. 


How to Apply Microservice Architecture to Embedded Systems

In short, the process of deploying and upgrading microservices for an embedded system has a strong dependency on the physical state of the system’s hardware. But there’s another significant constraint as well: data exchange. Data exchange between embedded devices is best implemented using a binary data format. Space and bandwidth capacity are limited in an embedded processor, so text-based formats such as XML and JSON won’t work well. Rather, a binary format such as protocol buffers or a custom binary format is better suited for communication in a microservices-oriented architecture (MOA) scenario in which each microservice in the architecture is hosted on an embedded processor. ... Many traditional distributed applications can operate without each microservice in the application being immediately aware of the overall state of the application. However, knowing the system’s overall state is important for microservices running within an embedded system. ... The important thing to understand is that any embedded system will need a routing mechanism to coordinate traffic and data exchange among the various devices that make up the system.
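To make the space argument concrete, here is a small sketch comparing a JSON encoding of a sensor reading with a packed binary encoding; the field layout is an illustrative assumption rather than a recommended wire format.

```python
# The same reading as JSON text versus a fixed binary layout (little-endian
# uint16 sensor id, float32 temperature, uint32 epoch seconds).
import json
import struct

reading = {"sensor_id": 7, "temperature_c": 21.5, "timestamp": 1721000000}

as_json = json.dumps(reading).encode()
as_binary = struct.pack(
    "<HfI",
    reading["sensor_id"],
    reading["temperature_c"],
    reading["timestamp"],
)

print(len(as_json), "bytes as JSON vs", len(as_binary), "bytes packed")  # 64 vs 10
```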


How to assess a general-purpose AI model’s reliability before it’s deployed

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences. To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. 
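The MIT paper's exact algorithm isn't reproduced in the excerpt, so the sketch below only illustrates the general idea of scoring agreement between two models' representations of the same test points, here via overlap of k-nearest-neighbor sets; it is a simplified proxy, not the researchers' method.

```python
# Simplified consistency proxy: do two models' embeddings of the same test points
# agree on which points are neighbors? Scores near 1.0 suggest consistent,
# and therefore more trustworthy, representations.
import numpy as np

def neighborhood_consistency(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 5) -> float:
    def knn(emb):
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return np.argsort(dists, axis=1)[:, 1:k + 1]   # skip self at index 0
    na, nb = knn(emb_a), knn(emb_b)
    overlap = [len(set(na[i]) & set(nb[i])) / k for i in range(len(na))]
    return float(np.mean(overlap))

# Usage: embed the same test set with each model in the ensemble and average the
# pairwise scores; low agreement flags the task as risky for this family of models.
```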


The Role of Technology in Modern Product Engineering

Product engineering has seen a significant transformation with the integration of advanced technologies. Tools like Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE) have paved the way for more efficient and precise engineering processes. The early adoption of these technologies has enabled businesses to develop multi-million dollar operations, demonstrating the profound impact of technological advancements in the field. ... Deploying complex software solutions often involves customization and integration challenges. Addressing these challenges requires close client engagement, offering configurable options, and implementing phased customization. ... The future of product engineering is being shaped by technology integration, strategic geographic diversification, and the adoption of advanced methodologies like DevSecOps. As the tech landscape evolves with trends such as AI, Augmented Reality (AR), Virtual Reality (VR), IoT, and sustainable technology, continuous innovation and adaptation are essential.


A New Approach To Multicloud For The AI Era

The evolution from cost-focused to value-driven multicloud strategies marks a significant shift. Investing in multicloud is not just about cost efficiency; it's about creating an infrastructure that advances AI initiatives, spurs innovation and secures a competitive advantage. Unlike single-cloud or hybrid approaches, multicloud offers unparalleled adaptability and resource diversity, which are essential in the AI-driven business environment. Here are a few factors to consider. ... The challenge of multicloud is not simply to utilize a variety of cloud services but to do so in a way that each contributes its best features without compromising the overall efficiency and security of the AI infrastructure. To achieve this, businesses must first identify the unique strengths and offerings of each cloud provider. For instance, one platform might offer superior data analytics tools, another might excel in machine learning performance and a third might provide the most robust security features. The task is to integrate these disparate elements into a seamless whole. 


How Can Organisations Stay Secure In The Face Of Increasingly Powerful AI Attacks

One of the first steps any organisation should take when it comes to staying secure in the face of AI-generated attacks is to acknowledge a significant top-down disparity between the volume and strength of cyberattacks, and the ability of most organisations to handle them. Our latest report shows that just 58% of companies are addressing every security alert. Without the right defences in place, the growing power of AI as a cybersecurity threat could see that number slip even lower. ... Fortunately, there is a solution: low-code security automation. This technology gives security teams the power to automate tedious and manual tasks, allowing them to focus on establishing an advanced threat defence. ... There are other benefits too. These include the ability to scale implementations based on the team’s existing experience and with less reliance on coding skills. And unlike no-code tools that can be useful for smaller organisations that are severely resource-constrained, low-code platforms are more robust and customisable. This can result in easier adaptation to the needs of the business.


Time for reality check on AI in software testing

Given that AI-augmented testing tools are derived from data used to train AI models, IT leaders will also be more responsible for the security and privacy of that data. Compliance with regulations like GDPR is essential, and robust data governance practices should be implemented to mitigate the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to mitigate bias within AI-augmented testing as much as possible. But maybe we’re getting ahead of ourselves here. Because even as AI continues to evolve and autonomous testing becomes more commonplace, we will still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results will remain a responsibility of testers. AI will change software testing for the better. But don’t treat any tool using AI as a straight-up upgrade. They all have different merits within the software development life cycle.


Overlooked essentials: API security best practices

In my experience, there are six important indicators organizations should focus on to detect and respond to API security threats effectively – shadow APIs, APIs exposed to the internet, APIs handling sensitive data, unauthenticated APIs, APIs with authorization flaws, APIs with improper rate limiting. Let me expand on this further. Shadow APIs: Firstly, it’s important to identify and monitor shadow APIs. These are undocumented or unmanaged APIs that can pose significant security risks. Internet-exposed APIs: Limit and closely track the number of APIs accessible publicly. These are more prone to external threats. APIs handling sensitive data: APIs that process sensitive data and are also publicly accessible are among the most vulnerable. They should be prioritized for security measures. Unauthenticated APIs: An API lacking proper authentication is an open invitation to threats. Always have a catalog of unauthenticated APIs and ensure they are not vulnerable to data leaks. APIs with authorization flaws: Maintain an inventory of APIs with authorization vulnerabilities. These APIs are susceptible to unauthorized access and misuse. Implement a process to fix these vulnerabilities as a priority.
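The sixth indicator, improper rate limiting, is listed above but not expanded on in the excerpt. As a reference point, here is a minimal token-bucket sketch of the control it refers to; the rate and burst values are illustrative defaults, not recommendations.

```python
# Minimal token-bucket rate limiter: each API key gets a bucket that refills at a
# steady rate; requests that find the bucket empty should be rejected with HTTP 429.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float = 10.0, burst: int = 20):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Usage: keep one TokenBucket per API key; when allow() returns False, respond 429.
```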



Quote for the day:

"The successful man doesn't use others. Other people use the successful man. For above all the success is of service" -- Mark Kainee

Daily Tech Digest - July 16, 2024

Learning cloud cost management the hard way

The rapid adoption of cloud technologies has outpaced the development of requisite skills within many organizations, leading to inefficiencies in provisioning, managing, and optimizing cloud resources. The No. 1 excuse that I hear from those overspending on cloud computing is that they can’t find the help they need to maximize cloud resources. They are kicking years of cloud-powered technical debt down the road, hoping that someone or some tool will come along to fix everything. ... Automation tools powered by AI can play a crucial role in ensuring that resources are only provisioned when needed and decommissioned when not in use, thus preventing idle resources from unnecessarily accruing costs. Moreover, a robust cost governance framework is essential for cloud cost management. This framework should include policies for resource provisioning, usage monitoring, and cost optimization. ... It’s frustrating that we’ve yet to learn how to do this correctly. 2020 wants its cloud spending problems back. This is not the only survey I’ve seen that reveals cost inefficiencies on a massive scale. I see this myself.
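As an illustration of the "decommission when not in use" point, the hedged sketch below looks for EC2 instances with low CPU utilization over the past week using boto3; AWS is shown purely as an example, and the threshold and the (commented-out) stop action are assumptions to adapt.

```python
# Find instances whose average CPU stayed under a threshold for a week and flag them
# as decommissioning candidates. Review before stopping anything.
import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=7)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        iid = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )
        averages = [point["Average"] for point in stats["Datapoints"]]
        if averages and max(averages) < 5.0:      # assumed "idle" threshold
            print(f"{iid} looks idle; candidate for stop/decommission review")
            # ec2.stop_instances(InstanceIds=[iid])  # deliberately left commented out
```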


The Interplay Of IoT And Critical Infrastructure Security

The vast number of interconnected devices in an IoT-driven infrastructure creates a massive attack surface. These devices often have limited processing power and may lack robust security features, which potentially makes them easy targets. ... The silver lining is that such heterogeneous cabling architectures can be glued together to deliver fiber-grade connectivity without the need to build new high-cost networks from scratch. An illustration of this tactic is Actelis’ hybrid-fiber technology harnessing high-performance managed Ethernet access switches and extenders to make the most of existing network infrastructures and provide gigabit speeds via virtually any wireline media. Actelis’ hybrid-fiber networking concept includes sections of fiber (for the easy-to-reach-with-fiber locations) and copper/coax that can be upgraded with Actelis’ technology to run fiber-grade communication. The company does both and provides management, security, and end-to-end integration for such entire networks, including fiber parts. This is important, as it represents a significant part of the market, selling both fiber and non-fiber networking.


MIT Researchers Introduce Generative AI for Databases

The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries. They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language. A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, she can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers. For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” ... Incorporating a probabilistic model can capture more complex interactions. Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.
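GenSQL's actual query syntax isn't shown in the excerpt, so the sketch below does not reproduce it; it only illustrates the underlying idea of conditioning a fitted probabilistic model to answer a question like the Seattle/Rust example, using a hypothetical model object with a sample() method.

```python
# Estimate P(knows_rust | city == "Seattle") by sampling a fitted probabilistic model.
# `model` is a hypothetical object whose sample() returns a dict of column values.
def conditional_probability(model, n_samples: int = 10_000) -> float:
    samples = [model.sample() for _ in range(n_samples)]
    seattle = [s for s in samples if s["city"] == "Seattle"]      # condition on the evidence
    if not seattle:
        return float("nan")                                       # no support for the condition
    return sum(bool(s["knows_rust"]) for s in seattle) / len(seattle)
```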


Securing Applications in an Evolving API Security Landscape

Adding further fuel to the fire, threat actors constantly innovate, developing new techniques to target APIs. This includes exploiting zero-day vulnerabilities, leveraging stolen credentials through phishing attacks, or even using bots to automate brute-force attacks against API endpoints. Traditionally, API security focused on reactive measures, patching vulnerabilities and detecting attacks after they happened. However, GenAI allows attackers to automate tasks, churning out mass phishing campaigns or crafting malicious code specifically designed to exploit API weaknesses. These attacks, known for their speed and volume, easily overwhelm traditional security solutions designed for static environments. ... APIs are crucial for modern business operations, driving innovation and customer interactions. However, without proper security measures, APIs can be vulnerable to attacks that can put sensitive data at risk, disrupt operations, and damage customer trust. ... This underscores the real-world impact of inadequate API security, including delayed innovation, frustrated customers, and lost revenue. 


Fundamentals of Descriptive Analytics

Descriptive analytics helps to describe and present data in a format that can be easily understood by a wide variety of business readers. Descriptive analytics rarely attempts to investigate or establish cause-and-effect relationships. As this form of analytics doesn’t usually probe beyond surface-level analysis, the validity of its results is more easily established. Some common methods employed in descriptive analytics are observations, case studies, and surveys. ... The main disadvantage of descriptive analytics is that it only reports what has happened in the past or what is happening now, without explaining the root causes behind the observed behaviors or predicting what is about to happen in the future. The analysis is generally limited to a few variables and their relationships. However, descriptive analytics becomes a powerful business resource when it is combined with other types of analytics for assessing business performance. While descriptive analytics focuses on reporting past or current events, the other types of analytics explore the root causes behind observed trends and can also predict future outcomes based on historical data analysis.


Strategies for creating seamless and personalised customer journeys

One cannot overlook the fact that CX in financial services is crucial since it influences customer satisfaction levels. In turn, this depends on providing seamless experiences across all touchpoints, beginning with the initial contact, right up to the final delivery and the consumption of financial products or services. Significantly, CX in financial services goes beyond merely offering excellent customer service. Instead, it involves ensuring frictionless, personalised experiences that surpass expectations across every stage of the customer’s journey. Be it banking, insurance or investment services, the ongoing digital transformation has substantially enhanced the expectations of customers. Today, speed, convenience and personalisation are taken for granted. As a result, financial entities that provide superior CX enjoy a clear competitive advantage in both attracting and retaining their customers, leading to increased profitability. ... Navigating complex regulatory environments also ensures financial integrity: regulations uphold standards and protect consumers, and compliant firms earn trust, enhancing their reputation while prioritizing customer experience to stay competitive in the market.


Cultivating Continuous Learning: Transforming L&D for the Hybrid Future and Beyond

Adapting L&D for the Digital Age requires strategic initiatives that cater to evolving demands in today's dynamic landscape. This entails embracing technology-driven learning platforms and tools that facilitate remote access, personalised learning paths, and real-time analytics. By leveraging these innovations, organisations can ensure that learning experiences are agile, responsive, and tailored to individual needs. Integrating digital collaboration tools fosters a culture of continuous learning and knowledge sharing across geographies, enhancing organisational agility and competitiveness in a digital-first world. Future-proofing L&D involves identifying and developing critical skills that will drive success in tomorrow's workplace. This proactive approach requires foresight into emerging trends and industry shifts, preparing employees with skills such as adaptability, digital literacy, creativity, and emotional intelligence. Implementing forward-thinking training programs and certifications ensures that employees remain adept and resilient in the face of technological advancements and market disruptions.


Cybersecurity Can Be a Business Enabler

Cybersecurity controls and protective mechanisms can protect an organization's assets - its data, people, technology equipment, etc. By actively protecting assets and preventing data breaches, an organization can avoid potential negative business impact, financial or otherwise. And because the organization does not have to worry about that potential damage, it can operate in a safe and focused fashion. ... Suffering a negative security event indicates some gap or deficiency in an organization's security posture. All organizations have security gaps, but some never report a negative security event - probably because they have invested more resources in differentiating themselves from their competitors. Implementing strong protective measures shows customers that an organization takes security seriously, which makes it a more appealing business partner. ... Customers and partners are becoming more aware of cyber risks, and they prioritize cybersecurity when they consider engaging in business. By implementing effective cybersecurity measures, a company can improve the confidence that potential customers and partners have in it. Over time, this will lead to increased loyalty and trust.


What is transformational leadership? A model for motivating innovation

The most important thing you can do as a transformational leader is to lead by example. Employees will look to you as a role model for behavior in all areas of the workplace. If you lead with authenticity, employees will pick up on that behavior and feel inspired to maintain that high standard for performance. It’s not about manipulating employees into working hard, it’s about leading by example and positively influencing others through a commitment to trust, transparency, and respect. ... To help create change, it’s important to challenge long-standing beliefs in the company and push the status quo by encouraging innovation, creativity, critical thinking, and problem-solving. Transformative leaders should help employees feel comfortable exploring new ideas and opportunities that can inject innovation into the organization. ... As a transformational leader, you will need to encourage your team to feel attached and committed to the vision of the organization. You want to ensure employees feel as committed to these goals as you do as a leader by giving employees a strong sense of purpose, rather than attempting to motivate them through fear.


How Post-Quantum Cryptography Can Ensure Resilience

Quantum computing represents a major threat to data security, as it can make attacks against cryptography much more efficient. There are two ways bad actors could use this technology. One is the “Store now, decrypt later” method, in which cybercriminals steal sensitive data and wait until quantum computers have the ability to break its encryption. This is particularly important for you to know if your organization retains data with a long confidentiality span. The other method is to break the data’s digital signatures. A bad actor could “compute” credentials based on publicly available information, then impersonate someone with the authority to sign documents or approve requests. As with the above method, criminals can do this retroactively if older signatures are not updated. Today’s encryption methods cannot stand against the capabilities of tomorrow’s quantum computers. When large-scale quantum computers are built, they will have the computing ability to decrypt many of the current public key cryptography systems.



Quote for the day:

"The only limit to our realization of tomorrow will be our doubts of today." -- Frank D Roosevelt

Daily Tech Digest - July 15, 2024

For databases that need to hold and query hundreds of petabytes to exabytes of data, there are only a few viable choices. Be very careful about evaluating and planning exabyte-scale databases, as they are huge commitments and hard to change once the data is loaded. ... Data consistency is usually “strong” for SQL databases, meaning that all reads return the latest data. Data consistency may be anything from “eventual” to “strong” for NoSQL databases. Eventual consistency offers lower latency, at the risk of reading stale data. Consistency is the “C” in the ACID properties required for validity in the event of errors, network partitions, and power failures. The four ACID properties are Atomicity, Consistency, Isolation, and Durability. ... Some databases are faster at reads and queries, and others are faster at writes. The mix of reads and writes you expect from your application is a useful number to include in your database selection criteria, and can guide your benchmarking efforts. The optimum choice of index type differs between read-heavy applications and write-heavy applications.
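Since the expected read/write mix is called out as a selection criterion, a quick way to get that number is to count operations in an access log. The sketch below assumes a hypothetical log format where each line begins with READ or WRITE.

```python
# Estimate the read/write mix from an access log to guide database selection and
# benchmarking. The log format (leading READ/WRITE token) is an assumption.
from collections import Counter

def read_write_mix(log_path: str) -> dict[str, float]:
    counts: Counter[str] = Counter()
    with open(log_path) as log:
        for line in log:
            op = line.split(maxsplit=1)[0].upper() if line.strip() else ""
            if op in ("READ", "WRITE"):
                counts[op] += 1
    total = sum(counts.values()) or 1
    return {op: counts[op] / total for op in ("READ", "WRITE")}

# A result like {"READ": 0.95, "WRITE": 0.05} points toward read-optimized indexes
# and replicas; a write-heavy mix points toward write-optimized storage engines.
```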


Can we please stop talking about replacing employees with AI?

With mundane, repetitive tasks taken over by AI, your team will have more room for creative, human work. Shift your organizational structure and job descriptions to open up room for the strategic, relationship-oriented work that AI just can’t do as well as a human. With less on employees’ to-do lists, leaders and managers can shift their focus to challenging and developing them in other, more valuable areas, such as building relationships and soft skills. AI can help here, too, but not in the way you may think. Rather than replacing humans, it can unlock insights (from data most companies already have) that lead to hiring and nurturing the right humans for the job. Software used to identify the important soft skills that lead to success in specific roles can be extremely valuable for both employers and employees. Ultimately, improvements in job performance and satisfaction will boost morale and your bottom line — a win-win. ... A good leader knows that for any organization to survive, people cannot be replaced. AI should instead help make employees exponentially more valuable to their companies by spending more time on creative productivity, and for companies to become more valuable to employees by improving their lives.


The Complexity of Solving Performance Problems

Our brains work in two different modes, particularly as we’re solving problems: intuitive and analytical. Intuitive is the mode that our brain is in when we’re not actively and effortfully thinking about something. It’s automatic. Analytical is the mode where we’re actually applying effort and working diligently in a focused manner in our brains. ... Nobody ever says, “Come fix my system,” without telling you what the problem is, right? They might just tell you that it’s slow, but there’s almost always something along with that. Maybe “It’s slow and I think there’s an IO problem” or “It’s slow and it looks like we’re pegging the CPU between two and three o’clock in the morning.” There’s always a little bit of information. That information gives us the ability to start theorizing early. That’s the main characteristic of this approach. ... With the methodical approach, we’re basically going dogmatically through the steps outlined above. And the key characteristic of this one is that we have the mental discipline to postpone that theorizing step until after we’ve gathered the data.


Liquid cooling solutions for high-density computing applications

One of the major challenges when designing data centers is anticipating end-of-life options and accurately predicting the nominal heat load from the beginning. This unpredictability often leads to a disconnect between projected and actual heat demands. For instance, a system designed for 1MW might only require 150kW in reality. Liquid cooling intensifies this challenge due to fluctuating cooling demands; as servers process large amounts of data, energy demand, and heat rejection peak, then drop drastically when idle. ... Liquid cooling offers significant sustainability gains and surpasses traditional air cooling methods in efficiency. By capturing heat directly at its source, liquid cooling allows operation at higher temperatures within the cooling loop. This facilitates the use of free cooling techniques, leveraging ambient air for cooling without resorting to additional energy-intensive processes. Moreover, the captured heat can be reused, further enhancing energy efficiency and sustainability.


What to do if certificates for passive authentication fail

If you have just one source and it becomes unavailable, your entire authentication process can be compromised. A wise solution would be to use a combination of sources to ensure higher service availability. At the government-to-government level, countries share certificates for passive authentication with each other through diplomatic channels, thus collecting their own database of trusted certificates. ... No matter which sources you trust to get certificates and master lists from—be it a central repository or each publisher directly—the responsibility remains yours. Organizations like the ICAO or BSI may facilitate the process, but they are not liable if something goes wrong. Also, understanding the certificate system and its expiration dates is crucial. You need to set up alerts for approaching expiration dates to ensure you have a fresh certificate ready to replace the old one. The most responsible organizations don’t rely solely on external sources. They collect certificates from various countries, create their own master list, sign it with their own certificate, and ensure that it’s valid.
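The expiration alerts mentioned above are straightforward to automate. The sketch below uses the widely used cryptography package to scan a directory of PEM certificates; the directory layout and the 30-day warning window are assumptions.

```python
# Warn about certificates (e.g. on a passive-authentication master list) that expire
# within the warning window so a fresh certificate can be staged in time.
import datetime
import pathlib
from cryptography import x509

WARN_WINDOW = datetime.timedelta(days=30)

def expiring_soon(cert_dir: str) -> list[str]:
    now = datetime.datetime.utcnow()
    alerts = []
    for pem in pathlib.Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        # not_valid_after is a UTC datetime; newer cryptography releases also expose
        # not_valid_after_utc as a timezone-aware variant.
        if cert.not_valid_after - now < WARN_WINDOW:
            alerts.append(f"{pem.name} expires {cert.not_valid_after:%Y-%m-%d}")
    return alerts
```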


Ignoring quantum threats in CBDC design is reckless

Relying on quantum mechanical phenomena, quantum computing far outpaces the ability of classical computers and supercomputers to solve complex problems in both speed and capability. Public-key cryptography is one of its primary targets. Thus, it’s alarming that developers of CBDCs are not prioritizing quantum-resistant designs in their early technical roadmaps. As RAND mathematician Alvin Moon noted, “A steady stream of advances in quantum computing technology is undeniably increasing the urgency to standardize and implement post-quantum algorithms, or algorithms which are resistant to both present-day computing and on-the-horizon quantum computing.” This security flaw represents a serious vulnerability in the US monetary system if central bank digital currencies are eventually adopted by the Federal Reserve and authorized by Congress. Valid concerns about surveillance and potential harm to civil liberties are driving debates and may positively influence laws, regulations and ethical governance standards. Technical designs continue to evolve, albeit slowly, to address privacy risks. However, without secure and resilient technical rails, CBDC adoption will remain a serious risk.


A CISO's Summary Of The Cyber Resilience Act

As a baseline, you need to make sure that whatever you’re building is as secure as is reasonable to be. Your product must have minimal attack surfaces and should be hardened. This means that by default it should not allow unauthorized access; its data is encrypted or protected; and its data and commands can’t be intercepted or manipulated. It also means that your product must keep working, even under a DoS attack, and that it mustn’t interrupt other devices, even when attacked with exploits. Additionally, your product needs to be able to monitor or log changes in the device, as well as receive security updates or rollbacks. This includes direct or remote updates, user notifications about updates and the ability to roll back updates or reset the product to a factory/default state. On top of that, you’ll also need to follow or provide documentation and information about your product, including a description of the design, development and vulnerability handling process. This would also include a formal SBOM, where you’d include more information on the components in your product and its potential vulnerabilities.


The Compelling Need for Low-Code in Legacy Application Modernization

With technology at the centre of India’s transformation initiatives, it is imperative that its base digital infrastructure undergoes transformation if the nation hopes to achieve its goals. However, in this quest to become a digital leader lies a huge challenge: legacy systems. Even today, in many organisations, legacy systems are still used to run daily processes and are a huge hurdle in the path of digital transformation. Legacy systems not only consume a significant portion of IT budgets but also make businesses incapable of adapting to fast-changing customer needs. ... There is another compelling method that has gained huge traction in recent times: low-code visual software development, which helps developers accelerate the software development process through a friendly and intuitive drag-and-drop approach. The low-code development approach offers organisations the ability to modernise their core systems without the cost and risk of traditional development approaches. Let us look at some of the common legacy systems that are prevalent in most organisations today and understand how low-code platforms can be used effectively.


How To Establish A Culture Of Cyber-Resilience Within An Organization

Workers ought to be aware of how their activities affect the security posture of the company and should feel confident enough to serve as the first line of defense. It is also essential to routinely update training programs to reflect the most recent threat environment. Organizations may greatly lower their susceptibility to cyberattacks by encouraging a culture of ongoing education and awareness. ... Incorporate cybersecurity into all company operations to ensure consideration of security issues during each undertaking and decision-making process. This interconnectedness allows each department to understand its responsibility in safeguarding the company’s digital assets, leading to a holistic approach to security. One essential first step is to create security rules and processes that support company goals. Making sure cybersecurity needs are part of the planning and development phases of a project facilitates early risk identification and the application of suitable safeguards. ... More than simply putting security technology into place, an organization must develop a cyber-resilience culture.


Mastering Long-Running Processes in Modern Architectures

It is important to note that long-running processes refer to the act of waiting, not algorithms running for extended periods. This waiting can be due to human actions, external responses, or intentionally allowing time to pass. These processes can take hours, days, weeks, or even longer. ... Besides business reasons for waiting, there are also technical ones, such as asynchronous communication delays, failures in message delivery, and the unavailability of peer services in distributed systems. If not addressed, these issues can lead to cascading failures. ... As soon as you start handling the complexities of distributed systems and long-running processes in payment processing, it is essential to embrace asynchronous communication in your API design. Although payments are usually quick and straightforward, situations like declined credit cards or unavailable services demand a different approach. We can build more adaptable and robust payment systems by designing systems that can handle immediate and delayed responses and using signals like a 202 HTTP code to indicate that processing will continue in the background.
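To make the 202-based pattern concrete, here is a minimal sketch of an asynchronous payment endpoint in Flask; the route names, in-memory store, and background thread are illustrative assumptions, not a production design.

```python
# Accept the payment request immediately (202 Accepted), continue processing in the
# background, and let clients poll a status endpoint for the outcome.
import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
payments: dict[str, str] = {}   # in-memory status store; use durable storage in reality

def process_payment(payment_id: str) -> None:
    # ... call the payment provider, retry on failure, wait for reviews, etc. ...
    payments[payment_id] = "completed"

@app.post("/payments")
def create_payment():
    payment_id = str(uuid.uuid4())
    payments[payment_id] = "processing"
    threading.Thread(target=process_payment, args=(payment_id,)).start()
    return jsonify(id=payment_id, status="processing"), 202   # processing continues later

@app.get("/payments/<payment_id>")
def payment_status(payment_id: str):
    return jsonify(id=payment_id, status=payments.get(payment_id, "unknown"))
```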



Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous

Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due to or related to the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to its energy consumption. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.
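For reference, The Green Grid defines CUE as total data-center carbon emissions divided by IT-equipment energy consumption; the tiny sketch below simply applies that definition to made-up numbers.

```python
# CUE = total data-center CO2-equivalent emissions / IT equipment energy.
# Lower is better; the figures here are invented for illustration.
def carbon_usage_effectiveness(total_co2_kg: float, it_energy_kwh: float) -> float:
    return total_co2_kg / it_energy_kwh   # kgCO2eq per kWh of IT energy

print(carbon_usage_effectiveness(total_co2_kg=450_000, it_energy_kwh=1_200_000))  # 0.375
```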


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation, it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem by harmonizing data feeds with the workflows running on the target platform, making it more relevant and less noisy.


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impact usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options to balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to: manage the customer experience (if enhancing a customer’s experience through personalized and timely interactions is a priority, real-time analytics is essential); operate efficiently (real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency); and handle competitive pressure (staying ahead of competitors requires quick adaptation to market trends and consumer demands, which is facilitated by real-time insights).


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such, a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this, the IT system will have access to the full spectrum of data to join and process the information, run through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on the culture. To achieve this, each team needs to listen to and understand their respective needs and then identify solutions that work towards helping to make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” Data faces a similar problem: when putting together a data stack, enabling the business at large to derive value from the data is both a key goal and a key challenge for a modern enterprise. Achieving business value from data is the last mile, and it is made difficult by numerous, complex technologies that are inaccessible to the final business user. Gnibus explained that Alteryx addresses this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is an AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality and works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - July 12, 2024

4 considerations to help organizations implement an AI code of conduct

Many organizations consider reinventing the wheel to accommodate AI tools, but this creates a significant amount of unnecessary work. Instead, they should subject any AI tool to the same rigorous procurement process that applies to any product that touches data security. The procurement process must also take into consideration the organization’s privacy and ethical standards, to ensure these are never compromised in the name of new technology. ... It’s important to be conscious of the privacy policies of AI tools when using them in an enterprise environment, and to use them only under a commercial license. To address this risk, an AI code of conduct should stipulate that free tools are categorically banned for use in any business context. Instead, employees should be required to use an approved, officially procured, commercially licensed solution with full privacy protections. ... Every organization needs to remain aware of how its technology vendors use AI in the products and services it buys from them. To enable this, an AI code of conduct should also enforce policies that allow organizations to keep track of their vendor agreements.


From Microservices to Modular Monoliths

You know who really loves microservices? Cloud hosting companies like Microsoft, Amazon, and Google. They make a lot of money hosting microservices. They also make a lot of money selling you tools to manage your microservices. They make even more money when you have to scale up your microservices to handle the increased load on your system. ... So what do you do when you find yourself in microservice hell? How do you keep the gains you (hopefully) made in breaking up your legacy ball of mud, without having to constantly contend with a massively distributed system? It may be time to (re)consider the modular monolith. A modular monolith is a monolithic application that is broken up into modules. Each module is responsible for a specific part of the application. Modules can communicate with each other through well-defined interfaces. This allows you to keep the benefits of a monolithic architecture, while still being able to break up your application into smaller, more manageable pieces. Yes, you'll still need to deal with some complexity inherent to modularity, such as ensuring modules remain independent while still being able to communicate with one another efficiently. 
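For illustration only, here is a minimal sketch of that structure in Python, with invented module names: ordering and billing live in separate modules of a single deployable application, and the orders module depends only on a small billing interface rather than on billing internals or a network call.

```python
# Sketch of a modular monolith: separate modules with explicit interfaces,
# deployed as one process. Module and method names are illustrative only.
from abc import ABC, abstractmethod

class BillingInterface(ABC):
    """The only surface the orders module is allowed to depend on."""
    @abstractmethod
    def charge(self, customer_id: int, amount: float) -> bool: ...

class BillingModule(BillingInterface):
    def charge(self, customer_id: int, amount: float) -> bool:
        # Internal billing logic stays private to this module.
        print(f"charging customer {customer_id}: {amount:.2f}")
        return True

class OrdersModule:
    def __init__(self, billing: BillingInterface):
        self._billing = billing  # in-process call, no network hop

    def place_order(self, customer_id: int, amount: float) -> str:
        return "confirmed" if self._billing.charge(customer_id, amount) else "rejected"

# Wiring happens once, at application start-up.
orders = OrdersModule(BillingModule())
print(orders.place_order(customer_id=7, amount=49.99))
```

The interface is what keeps the modules independent: billing can change its internals, or even be split out into a service later, without the orders module noticing.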


Deep Dive: Optimizing AI Data Storage Management

In an AI data pipeline, various stages align with specific storage needs to ensure efficient data processing and utilization. The typical stages and their associated storage requirements are:
- Data collection and pre-processing: the storage where raw, often unstructured data is gathered and centralized (increasingly into data lakes) and then cleaned and transformed into curated data sets ready for training.
- Model training and processing: the storage that feeds the curated data set into GPUs for processing. This stage also needs to store training artifacts such as hyperparameters, run metrics, validation data, model parameters, and the final production inferencing model.
- Inferencing and model deployment: the mission-critical storage where the trained model is hosted for making predictions or decisions based on new data. The outputs of inferencing are utilized by applications to deliver results, often embedded into information and automation processes.
- Storage for archiving: once the training stage is complete, artifacts such as the different sets of training data and different versions of the model need to be stored alongside the raw data.
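One lightweight way to capture these pairings is as a configuration structure the platform team can review alongside the pipeline; the tier names and priorities below are illustrative assumptions, not product recommendations.

```python
# Illustrative mapping of AI pipeline stages to storage characteristics.
# Tier names and priorities are placeholders for this sketch.
from dataclasses import dataclass

@dataclass
class StorageProfile:
    stage: str
    tier: str
    priority: str

PIPELINE_STORAGE = [
    StorageProfile("collection_and_preprocessing", "data_lake_object_storage",
                   "capacity and ingest throughput"),
    StorageProfile("training_and_processing", "high_throughput_flash",
                   "keeping GPUs fed; persisting checkpoints and artifacts"),
    StorageProfile("inferencing_and_deployment", "mission_critical_low_latency",
                   "availability and latency"),
    StorageProfile("archiving", "cold_object_or_tape",
                   "durability and cost per TB"),
]

for p in PIPELINE_STORAGE:
    print(f"{p.stage}: {p.tier} ({p.priority})")
```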


RAG (Retrieval Augmented Generation) Architecture for Data Quality Assessment

RAG is designed to leverage LLMs on your own content or data. It involves retrieving relevant content to augment the context or insights as part of the generation process. However, RAG is an evolving technology with both strengths and limitations. RAG integrates information retrieval from a dedicated, custom, and accurate knowledge base, reducing the risk of LLMs offering generic or irrelevant responses. For example, when the knowledge base is tailored to a specific domain (e.g., legal documents for a law firm), RAG equips the LLM with relevant information and terminology, improving the context and accuracy of its responses. At the same time, there are limitations. RAG relies heavily on the quality, accuracy, and comprehensiveness of the information stored within the knowledge base; incomplete, inaccurate, or missing information can lead to misleading or irrelevant retrievals. Overall, the success of RAG hinges on quality data. So how are RAG models implemented? RAG has two key components: a retriever model and a generator model.
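To make those two components concrete, here is a minimal sketch in Python: the retriever is a toy keyword-overlap ranker over an invented knowledge base, and generate() is a placeholder standing in for a real LLM call, not any particular vendor's API.

```python
# Toy RAG loop: retrieve the most relevant documents, then pass them to the
# generator as context. A real system would use embeddings and a vector store
# instead of keyword overlap.
KNOWLEDGE_BASE = [
    "Contracts must be reviewed by the legal team before signature.",
    "Data retention policy: customer records are kept for seven years.",
    "All vendor agreements require a completed security questionnaire.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by shared terms with the query and return the top k."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in KNOWLEDGE_BASE]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g. a hosted chat-completion endpoint).
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(rag_answer("How long are customer records kept?"))
```

The point the article makes carries through even in this toy: if the knowledge base entries are wrong or missing, the retriever hands the generator bad context and the answer degrades accordingly.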
 

NoSQL Database Growth Has Slowed, but AI Is Driving Demand

As for MongoDB, it too is targeting generative AI use cases. In a recent post on The New Stack, developer relations team lead Rick Houlihan explicitly compared its solution to PostgreSQL, a popular open source relational database system. Houlihan contended that systems like PostgreSQL were not designed for the type of workloads demanded by AI: “Considering the well-known performance limitations of RDBMS when it comes to wide rows and large data attributes, it is no surprise that these tests indicate that a platform like PostgreSQL will struggle with the kind of rich, complex document data required by generative AI workloads.” Unsurprisingly, he concludes that using a document database (like MongoDB) “delivers better performance than using a tool that simply wasn’t designed for these workloads.” In defense of PostgreSQL, there is no shortage of managed service providers for Postgres that provide AI-focused functionality. Earlier this year I interviewed a “Postgres as a Platform” company called Tembo, which has seen a lot of demand for AI extensions. “Postgres has an extension called pgvector,” Tembo CTO Samay Sharma told me.
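As a brief, illustrative sketch of the extension Sharma mentions, the snippet below stores embeddings in Postgres with pgvector and runs a nearest-neighbour query through psycopg2; the table name, vector dimension, and connection details are assumptions made up for the example.

```python
# Minimal pgvector sketch: store embeddings in Postgres and query by
# similarity. Assumes a local Postgres with the pgvector extension available;
# the connection string and table are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS docs "
            "(id serial PRIMARY KEY, body text, embedding vector(3));")
cur.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)",
            ("hello world", "[0.1, 0.2, 0.3]"))

# '<->' is pgvector's L2 distance operator; '<=>' would give cosine distance.
cur.execute("SELECT body FROM docs ORDER BY embedding <-> %s LIMIT 5;",
            ("[0.1, 0.2, 0.25]",))
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```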


Let’s Finally Build Continuous Database Reliability! We Deserve It

While we worked hard to make our CI/CD pipelines fast and learned how to deploy and test applications reliably, we haven't advanced the database world in the same way. It's time to bring continuous reliability to databases as well. To do that, developers need to own their databases. Once developers take ownership, they will be ready to optimize the pipelines, thereby achieving continuous reliability for databases. This shift of ownership needs to be consciously driven by technical leaders. ... The primary advantage of implementing database guardrails and empowering developers to take ownership of their databases is scalability. This approach eliminates team constraints, unlocking teams' complete potential and enabling them to operate at their optimal speed. By removing the need to collaborate with other teams that lack comprehensive context, developers can work more swiftly, reducing communication overhead. Just as we recognized that streamlining communication between developers and system engineers was the first step, which led to the evolution of the DevOps engineer, the objective here is to eliminate dependence on other teams.
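The article doesn't prescribe a specific mechanism, but one common form a database guardrail can take is a CI step that scans pending migration SQL for known-risky patterns before it ships; the sketch below is a hypothetical, deliberately simplified example, not a complete policy.

```python
# Illustrative CI guardrail: scan a migration file for patterns that commonly
# cause incidents. The rules are examples only; a real guardrail would be
# tuned to the team's database and conventions.
import re
import sys

RISKY_PATTERNS = {
    r"\bDROP\s+TABLE\b": "dropping a table is irreversible",
    r"\bALTER\s+TABLE\b.*\bNOT\s+NULL\b": "adding NOT NULL can rewrite or lock large tables",
    r"\bCREATE\s+INDEX\b(?!\s+CONCURRENTLY)": "non-concurrent index builds block writes (Postgres)",
}

def check_migration(sql: str) -> list[str]:
    """Return a human-readable reason for every risky pattern found."""
    return [reason for pattern, reason in RISKY_PATTERNS.items()
            if re.search(pattern, sql, flags=re.IGNORECASE)]

if __name__ == "__main__":
    problems = check_migration(open(sys.argv[1]).read())
    for p in problems:
        print(f"guardrail warning: {p}")
    sys.exit(1 if problems else 0)  # non-zero exit fails the pipeline step
```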


Digital Transformation: Making Information Work for You

With information generated by digital transactions, the first goal is to ensure that the knowledge garnered does not stay confined to those directly participating in the transaction. Lessons learned from the transaction should become part of the greater organizational memory. This does not mean that every single transaction needs to be reported to every person in the organization. Nor does it mean that the information needs to be elevated in the same form or at the same velocity to all recipients. Those participating in the transaction need an operational view of the transaction, and they need it in real time. The information is the enabler of the human-to-computer-to-human transaction, and the speed of that information flow needs to be as quick as it was in the human-to-human transaction; otherwise, it will be viewed as a roadblock instead of an enabler. As information escalates to the next level of management, it needs to evolve into a managerial view. Managers are more interested in anomalies and outliers, or data at a summary level. This level of information is no less impactful to the organizational memory, but it is associated with a different level of decision-making.


Generative AI won’t fix cloud migration

The allure of generative AI lies in its promise of automation and efficiency. If cloud migration were a one-size-fits-all scenario, that would work. But each enterprise faces unique challenges based on its technology stack, business requirements, and regulatory environment. Expecting a generative AI model to handle all migration tasks seamlessly is unrealistic. ... Beyond the initial investment in AI tools, the hidden costs of generative AI for cloud migration add up quickly. For instance, running generative AI models often requires substantial computational resources, which can be expensive. Keeping generative AI models updated and secure also demands robust API management and cybersecurity measures. Finally, AI models need continual refinement and retraining to stay relevant, incurring ongoing costs. ... Successful business strategy is about knowing what works well and what needs to be improved. We all understand that AI is a powerful tool and has been for decades, but it needs to be considered carefully, once you've identified the specific problem you're looking to solve. Cloud migration is a complex, multifaceted process that demands solutions tailored to unique enterprise needs.


Navigating Regulatory and Technological Shifts in IIoT Security

Global regulations play a pivotal role in shaping the cybersecurity landscape for IIoT. The European Union’s Cyber Resilience Act (CRA) is a prime example, setting stringent requirements for manufacturers supplying products to Europe. By January 2027, companies must meet comprehensive standards addressing security features, vulnerability management, and supply chain security. ... The journey towards securing IIoT environments is multifaceted, requiring manufacturers to navigate regulatory requirements, technological advancements, and proactive risk management strategies. Global regulations like the EU’s Cyber Resilience Act set critical standards that drive industry-wide improvements. At the same time, technological solutions such as PKI and SBOMs play essential roles in maintaining the integrity and security of connected devices. By adopting a collaborative approach and leveraging robust security frameworks, manufacturers can create resilient IIoT ecosystems that withstand evolving cyber threats. The collective effort of all stakeholders is paramount to ensuring the secure and reliable operation of industrial environments in this new era of connectivity.


Green Software Foundation: On a mission to decarbonize software

One of the first orders of business in increasing awareness: getting developers and companies to understand what green software really is. Instead of reinventing the wheel, the foundation reviewed a course in the concepts of green software that Hussain had developed while at Microsoft. To provide an easy first step for organizations to take, the foundation borrowed from Hussain’s materials and created a new basic training course, “Principles of Green Software Engineering.” The training is only two or three hours long and brings all students to the same baseline. ... When it comes to software development, computing inefficiencies (and carbon footprints) are more visible, bulky libraries for example, and engineers can address them more easily. Everyday business operations, on the other hand, are a tad opaque but still contribute to the company’s overall sustainability score. Case in point: the carbon footprint of a Zoom call is harder to measure, Hussain points out. The foundation helped to define a Software Carbon Intensity (SCI) score, which applies to all business operations, including software development and the SaaS programs employees might use.
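The SCI specification expresses the score roughly as ((E × I) + M) per R: operational energy, multiplied by the carbon intensity of the electricity used, plus a share of embodied emissions, divided by a functional unit such as users or API calls. A minimal sketch of that arithmetic, with entirely made-up numbers:

```python
# Software Carbon Intensity, roughly SCI = ((E * I) + M) / R.
# The inputs below are invented purely to show the arithmetic.

def sci(energy_kwh: float, grid_intensity_gco2_per_kwh: float,
        embodied_gco2: float, functional_units: float) -> float:
    """Carbon emitted per functional unit (e.g. per API call), in gCO2e."""
    return (energy_kwh * grid_intensity_gco2_per_kwh + embodied_gco2) / functional_units

# Example: 12 kWh consumed, a 450 gCO2/kWh grid, an 8,000 gCO2 embodied share,
# amortised over 100,000 API calls served in the same window.
print(f"{sci(12, 450, 8_000, 100_000):.3f} gCO2e per API call")  # 0.134
```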



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell