Daily Tech Digest - March 20, 2024

How to deploy software to Linux-based IoT devices at scale

IoT development may be so nascent that it is not yet part of your mainstream DevOps processes—you may still be in the early stages of experimentation. Once you’re ready to scale, you’ll need to bring IoT into the DevOps fold. Needless to say, the scale and costs of dealing with thousands of deployed devices are significant. DevOps is an important approach for ensuring the seamless and efficient delivery of software, updates, and enhancements to IoT devices. By integrating IoT development into an established workflow, you’ll gain the improved collaboration, agility, assured delivery, control, and traceability that are part of a modern DevOps process. It’s critical to use a secure deployment process to protect your IoT devices from unauthorized access, inadvertent vulnerabilities, and malware. A secure deployment must include strong authentication methods for access to the devices and the management platform. Data transmitted between the devices and the management platform should be protected by encryption, and the connections that client devices make to the platform after deployment should always be encrypted as well.
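As a minimal illustration of the encrypted-transport requirement, the sketch below configures a mutually authenticated TLS client context using Python's standard `ssl` module. The certificate paths in the comment are hypothetical placeholders for device-specific credentials, and the function name is illustrative, not part of any particular IoT platform.

```python
import ssl

def make_device_tls_context(ca_file=None):
    """Build a client-side TLS context suitable for an IoT device
    connecting to its management platform."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    # Refuse legacy protocol versions.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Always verify the platform's certificate and hostname.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    # For mutual TLS, the device would also present its own certificate
    # (paths are deployment-specific placeholders):
    # ctx.load_cert_chain("/etc/device/client.crt", "/etc/device/client.key")
    return ctx

context = make_device_tls_context()
```

A context like this would then be passed to whatever transport library the device uses (HTTPS, MQTT over TLS, etc.) so that every connection to the platform is both encrypted and authenticated.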


In 5 Years, Coding will be Done in Natural Language

“Because AI is a tool,” he adds, people should be able to operate at a higher level of abstraction and become far more efficient at the job they do. Eventually, everyone is likely to be coding in natural language, but that wouldn’t necessarily make them a software engineer or a programmer. The skills required to be a coder are far more complex than putting prompts into an AI tool, copying the code, or merely typing in natural language. ... Soon, there may be a programming language based entirely on our own English language. Not to be confused with prompt engineering and writing code, the term natural language programming means that most of the coding would be done by the software in the backend. The programmer would only have to interact with the tool in English, or any other language, and never even look at the code. However, a few experts believe that English cannot be a programming language because it is filled with misunderstandings. “If they’re going into machines, which will affect the lives of people, we can’t afford that level of comedy,” said Douglas Crockford when talking to AIM.


Cybersecurity's Future: Facing Post-Quantum Cryptography Peril

The post-quantum cryptography era might not be open season on unprepared systems, he says, but rather an uneven landscape. There are layers of concerns to consider. “What I think scares me a little bit is that this type of attack is somewhat quiet,” Ho says. “The people who are going to be taking advantage of this -- the few people initially who have quantum computers as you can imagine, probably state actors -- will want to keep this on the downlow. You wouldn’t know it, but they probably already have access.” ... “From a technical perspective … being quantum-safe -- it’s a binary thing. You either are or you’re not,” says Duncan Jones, head of quantum cybersecurity with quantum computing company Quantinuum. If there is a particular computer system that an organization fails to migrate to new standards and protocols, he says that system will be vulnerable. However, the barrier to entry for access to quantum compute resources may limit the potential for early attackers who already have pockets deep enough to procure the technology. 


The New CISO: Rethinking the Role

CISOs need to be negotiators. They need to argue in favor of stronger security and convince boards and business units of the risks in terms they understand. How a CISO goes about this can vary, depending on whether board members' experience is in technology or business. Providing a demonstration that puts the technical risk into a business perspective can be helpful. CISOs should also talk with other C-level executives — as well as CISOs from other industries — to get advance buy-in and different perspectives on similar conversations they're having with their boards. ... CISOs need to be comfortable developing a risk-based approach focusing on the importance of resiliency, because attackers will get in. Developing a tested plan to respond to attacks is just as important as implementing preventative measures. … it's balancing the risk with the cost. ... CISOs should build a deeply technical team that can focus on key security practices. They should run tabletop exercises on scenarios such as a system shutdown or inability to connect to the Internet. CISOs must not rely on assumptions about how to respond; running through and testing all response plans is vital.


Architect’s Guide to a Reference Architecture for an AI/ML Data Lake

If you are serious about generative AI, then your custom corpus should define your organization. It should contain documents with knowledge that no one else has, and should only contain true and accurate information. Furthermore, your custom corpus should be built with a vector database. A vector database indexes, stores and provides access to your documents alongside their vector embeddings, which are the numerical representations of your documents. ... Another important consideration for your custom corpus is security. Access to documents should honor access restrictions on the original documents. (It would be unfortunate if an intern could gain access to the CFO’s financial results that have not been released to Wall Street yet.) Within your vector database, you should set up authorization to match the access levels of the original content. This can be done by integrating your vector database with your organization’s identity and access management solution. At their core, vector databases store unstructured data. Therefore, they should use your data lake as their storage solution.
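The access-control point above can be made concrete with a toy in-memory sketch: each document carries both an embedding and the roles inherited from the original document's access restrictions, and the search applies the authorization filter before ranking by cosine similarity. The documents, embeddings, and role names are all invented for illustration; a real deployment would do this inside the vector database, integrated with the identity and access management solution.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: each entry holds the document, its vector embedding, and
# the roles carried over from the source document's access restrictions.
corpus = [
    {"doc": "Q3 financial results", "embedding": [0.9, 0.1, 0.0], "roles": {"cfo", "finance"}},
    {"doc": "Employee handbook",    "embedding": [0.1, 0.8, 0.1], "roles": {"all"}},
    {"doc": "Engineering roadmap",  "embedding": [0.2, 0.1, 0.9], "roles": {"engineering"}},
]

def search(query_embedding, user_roles, top_k=2):
    # Authorization filter first: only documents the user may see.
    visible = [d for d in corpus if d["roles"] & (user_roles | {"all"})]
    # Then rank the visible documents by semantic similarity.
    ranked = sorted(visible, key=lambda d: cosine(query_embedding, d["embedding"]), reverse=True)
    return [d["doc"] for d in ranked[:top_k]]
```

With this filter in place, an intern querying with a finance-like vector never sees the CFO's unreleased results, while a user holding the `cfo` role does.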


5 ways private organizations can lead public-private cybersecurity partnerships

One tangible step that cybersecurity stakeholders can take is to build the bottom-up infrastructure that can meet JCDC’s top-down strategic vision as it attempts to descend into tactical usefulness. This can be done by encouraging the development of volunteer civil cyber defense organizations while simultaneously lobbying the federal government to support these entities. This kind of volunteer service model is an incredibly cost-efficient way to boost national defense, save federal government resources, and assure private stakeholders of their independence. ... Unfortunately, as criticism of the JCDC emphasizes, top-down P3 efforts often fail to do so effectively because strategic parameters drive the derivative mission parameters. If industry is to shape CISA’s P3 cyber initiatives more clearly toward alignment with practical tactical considerations, mapping out where innovation and adaptation come from in the interaction of key individuals spread across a complex array of interacting organizations (particularly during a crisis) becomes a critical common capacity.


Decoding tomorrow’s risks: How data analytics is transforming risk management

With digital technologies coming in, corporations can use data analytics to ensure goals correlate with their strategic needs. ... Among the different risk management strategies, data analytics can contribute to optimisation models, which direct data-backed resource deployment towards risk mitigation; scenario analysis, which recreates likely circumstances to gauge the effectiveness of different risk mitigation measures; and personalised responses, which supply custom-fit answers to specific market conditions. ... “I believe the role of data analytics in risk mitigation has become paramount, enabling organisations to make decisions based on data-driven insights. By leveraging advanced analytics techniques, such as predictive modelling and ML, we can anticipate threats and take measures to mitigate them. From a business perspective, data analytics is considered indispensable in risk management as it helps organisations identify, assess, and prioritise risks. Companies that leverage data analytics in risk management can gain an edge by minimising disruptions, maximising opportunities, and safeguarding their reputation,” Yuvraj Shidhaye, founder and director, TreadBinary, a digital application platform, mentioned.
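The scenario-analysis idea can be sketched as a simple Monte Carlo simulation: simulate many possible annual outcomes under an assumed incident probability and loss distribution, then compare the expected loss with and without a mitigating control. All of the numbers below (incident probability, loss distribution, mitigation effect) are invented for illustration, not a real risk model.

```python
import random
import statistics

def simulate_losses(n_scenarios, incident_prob, loss_mean, loss_sd, mitigation=0.0, seed=42):
    """Simulate per-scenario losses; `mitigation` is the assumed
    fractional reduction a control applies to each incurred loss."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_scenarios):
        loss = 0.0
        if rng.random() < incident_prob:
            # Draw a loss size, floored at zero, then apply the control.
            loss = max(0.0, rng.gauss(loss_mean, loss_sd)) * (1 - mitigation)
        losses.append(loss)
    return losses

baseline = simulate_losses(10_000, incident_prob=0.3, loss_mean=1_000_000, loss_sd=250_000)
mitigated = simulate_losses(10_000, incident_prob=0.3, loss_mean=1_000_000, loss_sd=250_000,
                            mitigation=0.4)

# Compare expected annual loss under the two scenarios.
expected_gap = statistics.mean(baseline) - statistics.mean(mitigated)
```

Running many such comparisons across candidate controls is one way the "data-backed resource deployment" described above can be quantified: the control with the largest expected-loss reduction per unit of cost wins the budget.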


How AI-Driven Cyberattacks Will Reshape Cyber Protection

Aside from adaptability and real-time analysis, AI-based cyberattacks also have the potential to cause more disruption within a small window. This stems from the way an incident response team operates and contains attacks. When AI-driven attacks occur, there is the potential to circumvent or hide traffic patterns. This is somewhat similar to criminal activity in which fingerprints are destroyed: the AI methodology is to alter the system log analysis process or delete actionable data. Perhaps advanced security algorithms that identify AI-based cyberattacks are the answer. ... AI has introduced challenges where security algorithms must become predictive, rapid and accurate. This reshapes cyber protection because organizations' infrastructure devices must support these methodologies. The concern is no longer simply that network intrusions, malware and software applications are risk factors, but how AI transforms cyber protection. The shield is not broken; it requires a transformation practice for AI-based attacks.


Four easy ways to train your workforce in cybersecurity

Do your employees install all kinds of random apps and programs? Do the same thing as the phishing emails: create your own dodgy software that locks the employee's computer, blast it out to the employee database, and see who falls for it. When they have to bring their IT assets in to be unlocked and get a scolding for installing suspicious material, however harmless, the lesson will stick. ... Cyber attacks soar during festive seasons, like the upcoming Holi holiday. Set up automated reminders to your employees to remind them not to blindly open greeting mails or click on suspicious links. You can track the open and read rate of these messages to get an idea of whether people are actually paying attention. ... If your IT team is savvy and has some time to spare, they can use generative AI to create fake personas – someone from another department, a vendor, or a customer – and see if these fake personas can fool people into giving away information they should be keeping confidential. This is particularly important, because many cyber criminals today are already using generative AI to scam unwitting victims. 


Report: AI Outpacing Ability to Protect Against Threats

There are "two sides of the coin" when it comes to AI adoption, said Greg Keller, CTO at JumpCloud. Employee productivity and technology stacks being embedded into SaaS solutions are "the new frontier," he said. "Yet there are universal security concerns. There is fear of commingling or escaping of your data into public sectors. And there is a fear of using one's data on an AI platform. CTOs are concerned about their data leaking through public LLMs," Keller said. ... "We're at the tail end of understanding digital transformation. Now, we are beginning the first phase of the identity transformation. These companies have done an amazing amount of work to lift and shift their technology stacks from legacy into the cloud with one exception - overwhelmingly, it's the Microsoft Active Directory problem," Keller said. "That's still on-premises or self-managed. So they're looking at ways to modernize this. We are in the earliest phases of security shifting away from endpoint-based [security]. Now, it's about understanding access control through the identity, and this is the new frontier."



Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." --Johann Wolfgang von Goethe

Daily Tech Digest - March 19, 2024

Is The Public Losing Trust In AI?

Of course, the simplest way to look at this challenge is that in order for people to trust AI, it has to be trustworthy. This means it has to be implemented ethically, with consideration of how it will affect our lives and society. Just as important as being trustworthy is being seen to be trustworthy. This is why the principle of transparent AI is so important. Transparent AI means building tools, processes, and algorithms that are understandable to non-experts. If we are going to trust algorithms to make decisions that could affect our lives, we must, at the very least, be able to explain why they are making these decisions. What factors are being taken into account? And what are their priorities? If AI needs the public's trust, then the public needs to be involved in this aspect of AI governance. This means actively seeking their input and feedback on how AI is used. Ideally, this needs to happen at both a democratic level, via elected representatives, and at a grassroots level. Last but definitely not least, AI also has to be secure. This is why we have recently seen a drive towards private AI – AI that isn't hosted and processed on huge public data servers like those used by ChatGPT or Google Gemini.


Reliable Distributed Storage for Bare-Metal CAPI Cluster

By default, most CAPI solutions will use the “Expand First” (or “RollingUpdateScaleOut” in CAPI terms) repave logic. This logic will install an additional fresh new server and add it to the cluster first, before then removing an old server. While this is useful to ensure the cluster never has less total compute capacity than before you started the repave operation, it is problematic for distributed storage clusters because you are introducing a new node without any data to the cluster, while taking away a node that does contain data. So instead, we want to use the “Contract First” repave logic for the pool of storage nodes. That way, we can remove a storage node first, then reinstall it and add it back to the cluster, thereby immediately restoring data redundancy. ... So, if a different issue causes the distributed storage software to not install properly on the new node, you can still run into trouble. For example, Portworx supports specific kernel versions, and installing new nodes with a kernel version it doesn’t support can prevent the installation from succeeding. For that reason, it’s a good idea to lock the kernel version that MaaS deploys. Reach out to us if you want to learn how to achieve that.


Evaluating databases for sensor data

The primary determinant in choosing a database is understanding how an application’s data will be accessed and utilized. A good place to begin is by classifying workloads as online analytical processing (OLAP) or online transaction processing (OLTP). OLTP workloads, traditionally handled by relational databases, involve processing large numbers of transactions by large numbers of concurrent users. OLAP workloads are focused on analytics and have distinct access patterns compared to OLTP workloads. In addition, whereas OLTP databases work with rows, OLAP queries often involve selective column access for calculations. ... Another consideration when selecting a database is the internal team’s existing expertise. Evaluate whether the benefits of adopting a specialized database justify investing in educating and training the team and whether potential productivity losses will appear during the learning phase. If performance optimization isn’t critical, using the database your team is most familiar with may suffice. However, for performance-critical applications, embracing a new database may be worthwhile despite initial challenges and hiccups.
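The row-versus-column distinction can be made concrete with a toy example: the same records stored row-wise (as an OLTP engine would serve a single-record lookup) and column-wise (as an OLAP engine would scan one column for an aggregate). The field names are illustrative only.

```python
# Row store: each record kept together -- efficient for OLTP-style
# lookups and updates of a whole record.
rows = [
    {"order_id": 1, "customer": "acme",   "amount": 120.0},
    {"order_id": 2, "customer": "globex", "amount": 75.5},
    {"order_id": 3, "customer": "acme",   "amount": 300.0},
]

# Column store: each column kept together -- an analytical query that
# aggregates one column touches only that column's data.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["acme", "globex", "acme"],
    "amount":   [120.0, 75.5, 300.0],
}

# OLTP-style access: fetch one complete record by key.
order = next(r for r in rows if r["order_id"] == 2)

# OLAP-style access: aggregate a single column without reading the rest.
total = sum(columns["amount"])
```

At real scale the layout difference translates directly into I/O: the columnar aggregate reads one column's worth of data rather than every field of every row, which is why OLAP engines favor column storage.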


Surviving the “quantum apocalypse” with fully homomorphic encryption

There are currently two distinct approaches to facing an impending “quantum apocalypse”. The first uses the physics of quantum mechanics itself and is called Quantum Key Distribution (QKD). However, QKD only really solves the problem of key distribution, and it requires dedicated quantum connections between the parties. As such, it is not scalable to solve the problems of internet security; instead, it is most suited to private connections between two fixed government buildings. It is impossible to build internet-scale, end-to-end encrypted systems using QKD. The second solution is to utilize classical cryptography but base it on mathematical problems for which we do not believe a quantum computer gives any advantage: this is the area of post-quantum cryptography (PQC). PQC algorithms are designed to be essentially drop-in replacements for existing algorithms, which would not require many changes in infrastructure or computing capabilities. NIST has recently announced standards for public key encryption and signatures which are post-quantum secure. These new standards are based on different mathematical problems.


Teams, Slack, and GitHub, oh my! – How collaborative tools can create a security nightmare

Fast and efficient collaboration is essential to today’s business, but the platforms we use to communicate with colleagues, vendors, clients, and customers can also introduce serious risks. Looking at some of the most common collaboration tools — Microsoft Teams, GitHub, Slack, and OAuth — it’s clear there are dangers presented by information sharing, as valuable as that is to business strategy. Any of these, if not safeguarded or used inappropriately, can be a tool for attackers to gain access to your network. The best protection is to ensure you are aware of these risks and apply the appropriate modifications and policies to your organization to help prevent attackers from gaining a foothold in your organization — that also means acknowledging and understanding the threats of insider risk and data extraction. Attackers often know your network better than you do. Chances are, they also know your data-sharing platforms and are targeting those as well. Something as simple as improper password sharing can allow a bad actor to phish their way into a company’s network and collaboration tools can present a golden opportunity.


Improving computational performance of AI requires upskilling of professionals in Embedded/VLSI area

Implementing AI systems or applications requires intensive computational processors and low-cost power to deploy algorithms. Here, Very Large Scale Integration (VLSI) and embedded system design play a critical role. VLSI design involves the creation and miniaturisation of complex circuits, such as processors, memory circuits, and more recently, customized hardware for AI applications. On the other hand, embedded systems are computing systems for dedicated or specific functionalities, such as smart agriculture or industrial automation. The integration of VLSI with AI has the potential to revolutionise various sectors by enabling faster, more power-efficient, and customised hardware for AI applications. ... AI-based solutions are applied in designing and deploying communication systems to significantly enhance network performance and thereby the overall user experience. Dynamic allocation of resources, such as power and bandwidth, can be done efficiently by AI algorithms, which leads to improved spectral efficiency, reduced interference, and power consumption. Intelligent beam forming using AI algorithms enables wireless systems to focus their power and frequency band for specific users or devices.


Microsoft announces collaboration with NVIDIA to accelerate healthcare and life sciences innovation with advanced cloud, AI and accelerated computing capabilities

Microsoft, NVIDIA and SOPHiA GENETICS are collaborating to leverage combined expertise in technology and genomics to develop a streamlined, scalable and comprehensive whole-genome analytical solution. As part of this collaboration, the SOPHiA DDM Software-as-a-Service platform, hosted on Azure, will be powered by NVIDIA Parabricks for SOPHiA DDM’s whole genome application. Parabricks is a scalable genomics analysis software suite that leverages full-stack accelerated computing to process whole genomes in minutes. Compatible with all leading sequencing instruments, Parabricks supports diverse bioinformatics workflows and integrates AI for accuracy and customization. ... Microsoft aims to propel healthcare and life sciences into an exciting new era of medicine, helping unlock transformative possibilities for patients worldwide. The combination of the global scale, security and advanced computing capabilities of Microsoft Azure with NVIDIA DGX Cloud and the NVIDIA Clara suite is set to accelerate advances in clinical research, drug discovery and care delivery.


How Deloitte navigates ethics in the AI-driven workforce: Involve everyone

The approach to developing an ethical framework for AI development and application will be unique for each organization. They will need to determine their use cases for AI as well as the specific guardrails, policies, and practices needed to make sure that they achieve their desired outcome while also safeguarding trust and privacy. Establishing these ethical guidelines -- and understanding the risks of operating without them -- can be very complex. The process requires knowledge and expertise across a wide range of disciplines. ... On a broader level, publishing clear ethics policies and guidelines, and providing workshops and trainings on AI ethics, were ranked in our survey as some of the most effective ways to communicate AI ethics to the workforce, and thereby ensure that AI projects are conducted with ethics in mind. ... Leadership plays a crucial role in underscoring the importance of AI ethics, determining the resources and experience needed to establish the ethics policies for an organization, and ensuring that these principles are rolled out. This was one reason we explored the topic of AI ethics from the C-suite perspective. 


How to stop data from driving government mad

This would be a start, but everybody in large organisations knows that top-down initiatives from the centre rarely work well at the coalface. If the JAAC is to be effective at converting data into information, what insight could it glean from structures that have evolved to do this? And what could it learn from scientific fields that manage this successfully? First, deep neural networks learn by repeatedly passing information back and forth until every neurone is tuned to achieve the same objective. Information flow in both directions is the key. Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge, notes that in government, "People at the coal face have a better understanding of the right interventions, although not what the central policy might be; a successful centre will have a co-ordinating function driven by an AI strategy, but will devolve power to the departments, professions, and regulators to implement it." Or, as Jess Montgomery, director of AI@Cam says: "Getting government data - and AI - ready will require foundational work, for example in data curation and pipeline building." 


Continuous Improvement as a Team

Conducting regular Retrospectives enables teams to pause and reflect on their past actions, practices, and workflows, pinpointing both strengths and areas for improvement. This continuous feedback loop is critical for adapting processes, enhancing team dynamics, and ensuring the team remains agile and responsive to change. Guarantee the consistency of your Retrospectives at every Sprint's conclusion. Before these sessions, collaboratively plan an agenda that promotes openness and inclusivity. Facilitators should incorporate practices such as anonymous feedback mechanisms and engaging games to ensure honest and constructive discussions, setting the stage for meaningful progress and team development. ... Effective stakeholder collaboration ensures the team’s efforts align with the broader business goals and customer needs. Engaging stakeholders throughout the development process invites diverse perspectives and feedback, which can highlight unforeseen areas for improvement and ensure that the product development is on the right track. Engage your stakeholders as a team, starting with the Sprint Reviews. 



Quote for the day:

“There's a lot of difference between listening and hearing.” -- G. K. Chesterton

Daily Tech Digest - March 18, 2024

Generative AI will turn cybercriminals into better con artists. AI will help attackers to craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect to see the quality of social engineering attacks improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions. ... AI is driving the democratisation of technology by helping less skilled users to carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential for helping malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years. Historically, such attacks required extensive technical expertise, but AI is beginning to show promise to lower these barriers. This could lead to more efforts to exploit systems at the lower level, giving attackers a foothold below the operating system and the industry’s best software security defences.


Get the Value Out of Your Data

A robust data strategy should have clearly defined outcomes and measurements in place to trace the value it delivers. However, it is important to acknowledge the need for flexibility during the strategic and operational phases. Consequently, defining deliverables becomes crucial to ensure transparency in the delivery process. To achieve this, adopting a data product approach focused on iteratively delivering value to your organization is recommended. The evolution of DevOps, supported by cloud platform technology, has significantly improved the software engineering delivery process by automating development and operational routines. Now, we are witnessing a similar agile evolution in the data management area with the emergence of DataOps. DataOps aims to enhance the speed and quality of data delivery, foster collaboration between IT and business teams, and reduce the associated time and costs. By providing a unified view of data across the organization, DataOps enables faster and more confident data-driven decision-making, ensuring data accuracy, up-to-datedness, and security. It automates and brings transparency to the measurements required for agile delivery through data product management.


Exposure to new workplace technologies linked to lower quality of life

Part of the problem is that IT workers need to stay updated with the newest tech trends and figure out how to use them at work, said Ryan Smith, founder of the tech firm QFunction, also unconnected with the study. The hard part is that new tech keeps coming in, and workers have to learn it, set it up, and help others use it quickly, he said. “With the rise of AI and machine learning and the uncertainty around it, being asked to come up to speed with it and how to best utilize it so quickly, all while having to support your other numerous IT tasks, is exhausting,” he added. “On top of this, the constant fear of layoffs in the job market forces IT workers to keep up with the latest technology trends in order to stay employable, which can negatively affect their quality of life.” ... “As IT has become the backbone of many businesses, that backbone is key to the businesses operations, and in most cases revenue,” he added. “That means it’s key to the business’s survival. IT teams now must be accessible 24 hours a day. In the face of a problem, they are expected to work 24 hours a day to resolve it. ...”


6 best operating systems for Raspberry Pi 5

Even though it has been nearly seven years since Microsoft debuted Windows on Arm, there has been a noticeable lack of ARM-powered laptops. The situation is even worse for SBCs like the Raspberry Pi, which aren’t even on Microsoft’s radar. Luckily, the talented team at the WoR project managed to find a way to install Windows 11 on Raspberry Pi boards. ... Finally, we have the Raspberry Pi OS, which has been developed specifically for the RPi boards. Since its debut in 2012, the Raspberry Pi OS (formerly Raspbian) has become the operating system of choice for many RPi board users. Since it was hand-crafted for the Raspberry Pi SBCs, it’s faster than Ubuntu and light years ahead of Windows 11 in terms of performance. Moreover, most projects tend to favor Raspberry Pi OS over the alternatives. So, it’s possible to run into compatibility and stability issues if you use any other operating system when replicating the projects created by the lively Raspberry Pi community. You won’t be disappointed with the Raspberry Pi OS if you prefer a more minimalist UI. That said, despite including pretty much everything you need to make the most of your RPi SBC, the Raspberry Pi OS isn't as user-friendly as Ubuntu.


Speaking without vocal cords, thanks to a new AI-assisted wearable device

The breakthrough is the latest in Chen's efforts to help those with disabilities. His team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don't know how to sign. The tiny new patch-like device is made up of two components. One, a self-powered sensing component, detects and converts signals generated by muscle movements into high-fidelity, analyzable electrical signals; these electrical signals are then translated into speech signals using a machine-learning algorithm. The other, an actuation component, turns those speech signals into the desired voice expression. The two components each contain two layers: a layer of biocompatible silicone compound polydimethylsiloxane, or PDMS, with elastic properties, and a magnetic induction layer made of copper induction coils. Sandwiched between the two components is a fifth layer containing PDMS mixed with micromagnets, which generates a magnetic field. Utilizing a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, the device is capable of detecting changes in the magnetic field when it is altered as a result of mechanical forces—in this case, the movement of laryngeal muscles.


We can’t close the digital divide alone, says Cisco HR head as she discusses growth initiatives

At Cisco, we follow a strengths-based approach to learning and development, wherein our quarterly development discussions extend beyond performance evaluations to uplifting ourselves and our teams. We understand that a one-size-fits-all approach is inadequate. To best play to our employees' strengths, we have to be flexible, adaptable, and open to what works best for each individual and team. This enables us to understand individual employees' unique learning needs, enabling us to tailor personalised programs that encompass diverse learning options such as online courses, workshops, mentoring, and gamified experiences, catering to diverse learning styles. As a result, our employees are energized to pursue their passions, contributing their best selves to the workplace. Measuring the quality of work, internal movements, employee retention, patents, and innovation, along with engagement pulse assessments, allows us to gauge the effectiveness of our programs. When it comes to addressing the challenge of retaining talent, it's essential for HR leaders to consider a holistic approach. 


Vector databases: Shiny object syndrome and the case of a missing unicorn

What’s up with vector databases, anyway? They’re all about information retrieval, but let’s be real, that’s nothing new, even though it may feel like it with all the hype around it. We’ve got SQL databases, NoSQL databases, full-text search apps and vector libraries already tackling that job. Sure, vector databases offer semantic retrieval, which is great, but SQL databases like SingleStore and Postgres (with the pgvector extension) can handle semantic retrieval too, all while providing standard DB features like ACID. Full-text search applications like Apache Solr, Elasticsearch and OpenSearch also rock the vector search scene, along with search products like Coveo, and bring some serious text-processing capabilities for hybrid searching. But here’s the thing about vector databases: They’re kind of stuck in the middle. ... It wasn’t that early either — Weaviate, Vespa and Milvus were already around with their vector DB offerings, and Elasticsearch, OpenSearch and Solr were ready around the same time. When technology isn’t your differentiator, opt for hype. Pinecone’s $100 million Series B funding was led by Andreessen Horowitz, which in many ways is living by the playbook it created for the boom times in tech.
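To make "semantic retrieval" concrete: under the hood, these systems rank stored embedding vectors by similarity to a query vector, most commonly cosine similarity. The sketch below shows that core operation in plain Python. The document IDs and three-dimensional vectors are invented for illustration; real systems use learned embeddings with hundreds of dimensions and an approximate-nearest-neighbor index rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, documents, top_k=2):
    # Rank stored (id, embedding) pairs by similarity to the query vector.
    scored = [(doc_id, cosine_similarity(query_vec, emb))
              for doc_id, emb in documents]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

docs = [("doc-a", [1.0, 0.0, 0.0]),
        ("doc-b", [0.7, 0.7, 0.0]),
        ("doc-c", [0.0, 1.0, 0.0])]
print(semantic_search([0.9, 0.1, 0.0], docs, top_k=1))
```

A pgvector query expresses the same ranking in SQL (e.g. ordering by a distance operator over a vector column), which is why relational databases can offer this without a dedicated vector store.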


The Role of Quantum Computing in Data Science

Despite its potential, the transition to quantum computing presents several significant challenges to overcome. Quantum computers are highly sensitive to their environment, with qubit states easily disturbed by external influences – a problem known as quantum decoherence. This sensitivity requires that quantum computers be kept in highly controlled conditions, which can be expensive and technologically demanding. Moreover, concerns about the future cost implications of quantum computing on software and services are emerging. Ultimately, the prices will be sky-high, and we might be forced to search for AWS alternatives, especially if they raise their prices due to the introduction of quantum features, as is the case with Microsoft banking everything on AI. This raises the question of how quantum computing will alter the prices and features of both consumer and enterprise software and services, further highlighting the need for a careful balance between innovation and accessibility. There’s also a steep learning curve for data scientists to adapt to quantum computing.


AI-Driven API and Microservice Architecture Design for Cloud

Implementing AI-based continuous optimization for APIs and microservices in Azure involves using artificial intelligence to dynamically improve performance, efficiency, and user experience over time. Here's how you can achieve continuous optimization with AI in Azure:

Performance monitoring: Implement AI-powered monitoring tools to continuously track key performance metrics such as response times, error rates, and resource utilization for APIs and microservices in real time.

Automated tuning: Utilize machine learning algorithms to analyze performance data and automatically adjust configuration settings, such as resource allocation, caching strategies, or database queries, to optimize performance.

Dynamic scaling: Leverage AI-driven scaling mechanisms to adjust the number of instances hosting APIs and microservices based on real-time demand and predicted workload trends, ensuring efficient resource allocation and responsiveness.

Cost optimization: Use AI algorithms to analyze cost patterns and resource utilization data to identify opportunities for cost savings, such as optimizing resource allocation, implementing serverless architectures, or leveraging reserved instances.
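As a rough sketch of the dynamic-scaling idea, the toy forecaster below sizes an instance pool from a moving-average prediction of request rate. The window size, per-instance capacity, and sample data are all invented for illustration; a real Azure setup would drive this from Azure Monitor metrics and autoscale rules rather than hand-rolled code.

```python
import math

def predict_load(history, window=3):
    # Naive forecast: moving average of the most recent samples.
    recent = history[-window:]
    return sum(recent) / len(recent)

def instances_needed(predicted_rps, capacity_per_instance=100, min_instances=1):
    # Round up so predicted demand never exceeds provisioned capacity.
    return max(min_instances, math.ceil(predicted_rps / capacity_per_instance))

rps_history = [120, 180, 240, 300, 330]   # requests/second, most recent last
forecast = predict_load(rps_history)      # (240 + 300 + 330) / 3 = 290.0
print(instances_needed(forecast))         # ceil(290 / 100) = 3
```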


4 ways AI is contributing to bias in the workplace

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. This online data inevitably reflects societal inequities and historical biases, which the AI bot inherits and replicates to some degree. No one should assume these tools are inherently objective simply because they're trained on large amounts of data from different sources. While generative AI bots can be useful, we should not underestimate the risk of bias in an automated hiring process -- and that reality is crucial for recruiters, HR professionals, and managers. Another study found racial bias is present in facial-recognition technologies that show lower accuracy rates for dark-skinned individuals. Something as simple as data for demographic distributions in ZIP codes being used to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.



Quote for the day:

"The most common way people give up their power is by thinking they don't have any." -- Alice Walker

Daily Tech Digest - March 17, 2024

Generative AI will drive a foundational shift for companies — IDC

“Over the last year, most organizations debated creating Chief AI Officers and centers of excellence to decide how to embed AI and create new business centers for new AI-enabled products and services,” said Rick Villars, group vice president of IDC’s Worldwide Research division. CIOs are also rethinking their capital investment plans and staffing needs based on AI initiatives, according to Villars, including how AI will affect an organization’s long-term revenue and profitability. Most organizations are likely to choose a hybrid approach to building out their AI plans — that is, companies will partner with service providers while also customizing existing AI platforms such as ChatGPT, as well as building their own proprietary, but smaller, AI models for specific use cases. “All applications you buy will become more intelligent. ... Phil Carter, group vice president of IDC’s Worldwide Thought Leadership Research, said organizations shouldn’t expect an immediate ROI from their investments. Like other major economic shifts, such as the arrival of the tractor in farming, the arrival of genAI technology can take decades to achieve widespread adoption and ROI.


Blockchain in trademark and brand protection, explained

Through the use of blockchain technology, firms are able to generate irreversible documentation of product legitimacy. It is possible to provide each product with a unique identification number that allows retailers and customers to instantly confirm its legitimacy. In addition to shielding customers against fake items, this also helps firms preserve their goodwill, ensure data integrity, and win over new customers. Additionally, supply chains benefit from the transparency and traceability that blockchain offers, allowing firms to monitor the flow of goods from manufacturing to distribution. Businesses can use blockchain technology to confirm the legitimacy of products and spot any illegal or fake goods that are trading in the market. ... it might be difficult and expensive to integrate blockchain technology with current systems and procedures. To apply blockchain efficiently, firms might need to redesign their infrastructure and make considerable investments in new technology and knowledge. This can be a major hurdle, particularly for smaller companies with tighter budgets. The implementation of blockchain in brand protection is further complicated by problems with scalability and interoperability.
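The tamper-evident product registry described above can be illustrated with a minimal hash chain. This is a toy sketch, not a real blockchain: there is no consensus or distribution here, and the product IDs and manufacturer names are invented. It shows only how chaining record hashes makes after-the-fact tampering detectable.

```python
import hashlib

def record_hash(product_id, manufacturer, prev_hash):
    # Chain each record to its predecessor so history is tamper-evident.
    payload = f"{prev_hash}|{product_id}|{manufacturer}".encode()
    return hashlib.sha256(payload).hexdigest()

ledger = []  # each entry: (product_id, manufacturer, hash)

def register(product_id, manufacturer):
    prev = ledger[-1][2] if ledger else "genesis"
    ledger.append((product_id, manufacturer,
                   record_hash(product_id, manufacturer, prev)))

def verify(product_id):
    # A retailer or customer checks that the ID exists and the chain is intact.
    prev = "genesis"
    found = False
    for pid, mfr, h in ledger:
        if h != record_hash(pid, mfr, prev):
            return False          # tampering detected anywhere breaks trust
        prev = h
        found = found or pid == product_id
    return found

register("SKU-001", "AcmeCo")
register("SKU-002", "AcmeCo")
print(verify("SKU-001"), verify("SKU-999"))  # True False
```

Editing any earlier record changes its hash, which no longer matches the chain, so verification fails for every lookup afterward.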


Open source is not insecure

It’s too easy whenever there is a major vulnerability to malign the overall state of open source security. In fact, many of these highest-profile vulnerabilities show the power of open source security. Log4shell, for example, was the worst-case scenario for an OSS vulnerability in terms of scale and visibility—this was one of the most widely used libraries in one of the most widely used programming languages. (Log4j was even running on the Mars rover. Technically this was the first interplanetary OSS vulnerability!) The Log4shell vulnerability was trivial to exploit, incredibly widespread, and seriously consequential. The maintainers were able to patch it and roll it out in a matter of days. It was a major win for open source security response at the maintainer level, not a failure. ... But today, most software consumption is occurring outside of distributions. The programming language package managers themselves—npm (JavaScript), pip (Python), RubyGems (Ruby), Composer (PHP)—look and feel like Linux distribution package managers, but they work a little differently. They basically offer zero curation—anyone can upload a package and mimic a language maintainer.


AI is keeping GitHub chief legal officer Shelley McKinley busy

“I would say that AI is taking up [a lot of] my time — that includes things like ‘how do we develop and ship AI products,’ and ‘how do we engage in the AI discussions that are going on from a policy perspective?,’ as well as ‘how do we think about AI as it comes onto our platform?’,” McKinley said. The advance of AI has also been heavily dependent on open source, with collaboration and shared data pivotal to some of the most preeminent AI systems today — this is perhaps best exemplified by the generative AI poster child OpenAI, which began with a strong open-source foundation before abandoning those roots for a more proprietary play ... “Regulators, policymakers, lawyers… are not technologists,” McKinley said. “And one of the most important things that I’ve personally been involved with over the past year, is going out and helping to educate people on how the products work. People just need a better understanding of what’s going on, so that they can think about these issues and come to the right conclusions in terms of how to implement regulation.” At the heart of the concerns was that the regulations would create legal liability for open source “general purpose AI systems,” which are built on models capable of handling a multitude of different tasks.


Is OpenAI Opening Up To Quantum?

It’s likely that the potential for quantum to solve certain computational tasks critical to OpenAI’s growth is one reason for the quantum feelers, as it were. First, as AI models become more sophisticated, the computational resources required to train them have skyrocketed. Quantum computing offers a potential solution to this bottleneck, promising speed-ups for specific types of computations, including those involved in machine learning and optimization problems. Quantum computers could one day — relying on superposition and entanglement — process vast amounts of data in ways that classical computers will struggle to manage and — again, eventually — use far less economic and environmental resources. OpenAI CEO Sam Altman recently made headlines for reports that he was seeking $7 trillion to make chips, apparently to feed this massive need for speed and processing power. He’s since said the reports on that figure were inaccurate, but the move still underscores OpenAI’s computational dilemma — grow, but reduce costs and improve performance. In a sentence, then, the potential integration of quantum computing with AI could boost model efficiency.


Flexera 2024 State of the Cloud Reveals Spending as the Top Challenge of Cloud Computing

“This is a complex year for cloud adoption. Organizations are navigating economic uncertainties by investing in generative AI, security, and sustainability while prioritizing cost management,” said Brian Adler, Senior Director, Cloud Market Strategy at Flexera. He further added, “Cloud adoption continues to grow. The shift toward hybrid and multi-cloud environments underscores the importance of comprehensive cost management, with nearly half of all workloads and data now in the public cloud. FinOps practices and cloud centers of excellence are growing as companies move toward centralized, strategic cloud management.” The report also shows an increase in multi-cloud usage, increasing to 89% from 87% last year. Sixty-one percent of large enterprises use multi-cloud security, and 57% use multi-cloud FinOps as cost optimization tools. Organizations are taking a centralized approach to cloud with 63% of organizations already having a cloud center of excellence (CCOE) and 14% planning on creating one within the next year. Sustainability has been high on the priority list of organizations.


Cloud CISO Perspectives: Easing the psychological burden of leadership

CISOs are the public face of an organization’s security team, and they sit at the nexus of the security experts, engineers, and developers who report to them, the organization’s security policies, and the executives and board of directors who they report to. They often are blamed for security breaches that occur on their watch, and yet CISOs are not fleeing their jobs — recent data suggests that, despite the stress of the role, they stay at their employer for more than four and a half years at a time. While a CISO who has stayed with one company for five years has clearly demonstrated their dedication to defending their organization’s data and supporting its security teams, it doesn’t mean that they’re happy. High-profile data breaches are on the rise, and government agencies are imposing stricter regulatory requirements including increasing levels of legal accountability (and even personal liability) for their organization’s cybersecurity posture. The stresses CISOs contend with can take a psychological toll, lead to poor decision-making, and even burnout. 


Tech Transformation in Food Technology with AI

AI-driven predictive analytics offer crop management assistance. AI employs historical data, weather patterns, and soil conditions to detect crop yield forecasts, optimal planting times, and potential disease outbreaks. This proactive approach allows farmers to implement preventive measures, adjust farming practices, and mitigate risks, ultimately improving crop quality and quantity. ... Automation is crucial in streamlining food processing operations. AI-powered robotics and machine learning systems automate repetitive tasks such as sorting, grading, and packaging, enhancing efficiency, consistency, and speed. This reduces labour costs and minimises human errors, ensuring uniform product quality and meeting stringent industry standards and consumer expectations. ... AI technologies optimise every aspect of the food supply chain, from farm to fork. AI algorithms optimise logistics by analysing data on transportation, inventory management, and consumer preferences. They minimise transportation costs and reduce food wastage. Real-time monitoring and predictive analytics enable proactive decision-making, ensuring timely delivery and optimal utilisation of resources.


Modernizing Data Management with Karen Lopez

“One thing I’ve found working in the data industry is that there’s always something new coming over the horizon,” Lopez began. “Even so, we can still find ourselves suffering from the same struggles I was working on 35 years ago.” However, she pointed out, although relational databases were the core of everything until about 10 years ago, at that time there was an explosion of other types of databases and data stores -- a fact that makes the addition of the word “modern” much more meaningful than it otherwise might have been. “There are just so many more opportunities for new approaches to data management now,” she added. “I’m usually more of a skeptic when I see ‘modern’ in front of anything,” Lopez said. “There are certain standards, principles, and practices that work even in this new environment. It usually takes someone with a lot of hard-won experience to be able to tell whether one of these new systems or tools is trustworthy. Some of these things may be really exciting, but they just don’t catch on. For example, maybe they’re not scalable or they don’t meet the cost-benefit test -- there are plenty of reasons.”


Navigating Application Security in the AI Era

AI-generated code and organization-specific AI models have quickly become important parts of corporate IP. This begs the question: Can compliance protocols keep up? AI-generated code is typically created by puzzling together multiple pieces of code found in publicly available code stores. However, issues arise when AI-generated code pulls these pieces from open source libraries with license types that are incompatible with an organization’s intended use. Without regulation or oversight, this type of “non-compliant” code based on un-vetted data can jeopardize intellectual property and sensitive information. Malicious reconnaissance tools could automatically extract the corporate information shared with any given AI model, or developers may share code with AI assistants without realizing they’ve unintentionally revealed sensitive information. ... AI can be used to deliberately create malicious, difficult-to-detect code and insert it into open-source projects. AI-driven attacks are often vastly different than what human hackers would create – and different from what most security protocols are designed to protect, allowing them to evade detection. 
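A first line of defense against the license-compatibility issue described above is an automated policy check over a dependency inventory (for example, an SBOM). The allowed-license set and package names below are hypothetical, chosen only to illustrate the shape of such a check; real compliance requires SPDX-aware scanning tools and legal review.

```python
# Hypothetical policy: licenses this organization accepts in a proprietary product.
ALLOWED_FOR_PROPRIETARY = {"MIT", "BSD-3-Clause", "Apache-2.0"}

def flag_incompatible(dependencies):
    # dependencies: list of (package_name, license_id) pairs, e.g. from an SBOM.
    # Returns the packages whose licenses fall outside the policy.
    return [name for name, lic in dependencies
            if lic not in ALLOWED_FOR_PROPRIETARY]

deps = [("utils-lib", "MIT"),
        ("viral-lib", "GPL-3.0-only"),   # copyleft: outside this policy
        ("parser-lib", "Apache-2.0")]
print(flag_incompatible(deps))  # ['viral-lib']
```

Run against the code an AI assistant produces, a check like this surfaces pulled-in snippets whose license terms conflict with the intended use before they reach the main branch.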



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - March 16, 2024

New knowledge base compiles Microsoft Configuration Manager attack techniques

“As with most 30-year-old technologies, Configuration Manager was not designed with modern security considerations,” the SpecterOps researchers said in a blog post announcing the new resource. “Many of its default configurations enable various components of its attack surface. Couple that with the inherent challenges of Active Directory environments and you have a massive attack surface suffering from a combined 55 years of technical debt.” The researchers claim they’ve encountered Configuration Manager deployments in almost every Active Directory environment they’ve investigated, a testament to the utility and popularity of the platform which allows admins to deploy applications, software updates, operating systems and compliance settings on a wide scale to servers and workstations. ... One of the most common insecure configurations for Configuration Manager encountered by SpecterOps is overprivileged network access accounts, which are among the many accounts that SCCM uses for its various tasks. “We (very) commonly find the network access account to be configured as the client push installation account (local admin on all clients), SCCM Administrator, or even domain administrator,” the researchers said.


The IaC Weight on DevOps’ Shoulders

On the one hand, distributing the IaC load lessens the burden on the DevOps teams, but the downside is that it becomes difficult to understand which resources are actually in use and which have been temporarily created for testing purposes. With many owners creating resources on demand, once they are no longer needed, these leftovers create confusion around dependencies and make cloud platforms disorganized and difficult to maintain. Just like enabling more hands to touch IaC creates greater sprawl and disorder, more users with less governance invite careless sprawl in terms of costs as well. This often results in duplicate and unused resources accumulating, wasting budget at a time when funds are tight and every penny counts. With a lack of automation and oversight, environments grow messy and expensive. The sprawl issues can also impact security, as expanding permissions raises valid security concerns that are intensified when clouds become disorganized and difficult to maintain. Well-intentioned developers may misconfigure resources or expose sensitive systems, and without proper methods to manage drift or misconfiguration, this can pose real risks to organizations and systems. Another important aspect that also increases with less oversight is intentional insider risk.
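One practical countermeasure to this kind of sprawl is a periodic sweep that flags resources with no recent activity and no ownership tag. The sketch below is a minimal illustration with invented resource records and an arbitrary 30-day idle threshold; in practice the inventory would come from the cloud provider's API or from Terraform state.

```python
from datetime import datetime, timedelta

def find_stale_resources(resources, now, max_idle_days=30):
    # resources: list of dicts with 'name', 'last_used' (datetime), 'tags'.
    # Untagged resources with no recent activity are likely test leftovers.
    stale = []
    for r in resources:
        idle = now - r["last_used"]
        if idle > timedelta(days=max_idle_days) and "owner" not in r["tags"]:
            stale.append(r["name"])
    return stale

now = datetime(2024, 3, 20)
inventory = [
    {"name": "vm-test-1", "last_used": datetime(2024, 1, 2), "tags": {}},
    {"name": "db-prod", "last_used": datetime(2024, 3, 19), "tags": {"owner": "data"}},
    {"name": "bucket-tmp", "last_used": datetime(2023, 12, 1), "tags": {}},
]
print(find_stale_resources(inventory, now))  # ['vm-test-1', 'bucket-tmp']
```

Feeding the flagged names into a review queue, rather than deleting automatically, keeps the sweep safe while still reclaiming forgotten test resources and their costs.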


How Observability Is Different for Web3 Apps

Many blockchain networks impose a fee for every transaction relayed over the network and successfully written to the blockchain. On the Ethereum network, for example, this fee is known as gas. As a result, it is critical that you not only monitor the functionality of your Web3 dApp but also pay close attention to the economic efficiency of it. Transactions that are unnecessarily large or too many transactions increase the cost of running your Web3 dApp. ... Decentralized applications rely heavily on smart contracts. A smart contract refers to a self-executing program deployed on a blockchain and executed by the nodes that run the network. Web3 dApps depend upon smart contracts for their operations. They serve as the “backend logic” of the dApp, running on the “server” (blockchain network). The operations executed by a smart contract often incur transaction fees. These fees are used to compensate the nodes that run the blockchain network for the computational power they provide to run the smart contract code. Additionally, smart contracts often handle sensitive operations like releasing or receiving funds in the form of cryptocurrency. 
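To make the economic-efficiency point concrete: on Ethereum, every transaction pays at least a fixed base cost of 21,000 gas, so batching operations into fewer transactions directly reduces fees. The arithmetic below is a simplified sketch; the 30 gwei gas price is an assumed figure for illustration (real prices fluctuate with network demand), and real transactions add per-byte and per-operation gas on top of the base cost.

```python
def transaction_fee_wei(gas_used, gas_price_wei):
    # On Ethereum, fee = gas consumed * price per unit of gas (in wei).
    return gas_used * gas_price_wei

def batch_savings(item_count, base_gas=21000, gas_price_wei=30_000_000_000):
    # Simplified illustration: sending N items in one transaction pays the
    # fixed 21,000-gas base cost once instead of N times.
    separate = item_count * base_gas * gas_price_wei
    batched = base_gas * gas_price_wei
    return separate - batched

fee = transaction_fee_wei(21000, 30_000_000_000)
print(fee / 1e18, "ETH")          # base transfer fee at the assumed price
print(batch_savings(10) / 1e18)   # base-cost overhead saved by batching 10 transfers
```

Monitoring dashboards for a dApp can track exactly this quantity per contract call, turning "economic efficiency" into a metric alongside latency and error rate.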


10 Cloud Security Best Practices 2024: Expert Advice

Digital supply chain security must be at the top of every company’s agenda as organizations increasingly work with third and fourth parties to drive innovation, said Nataraj Nagaratnam, IBM Fellow and CTO for Cloud Security at IBM. Modern enterprises require a vast array of hybrid and multi-cloud environments to support data storage and applications, he said. While industry cloud platforms with built-in security and controls are already helping enterprises within regulated industries de-risk the digital supply chain, including protecting banks and the vendors they transact with, organizations will need to continue to be diligent. Cloud security services can help reduce risk and enhance the compliance of cloud environments. He told Techopedia: “Enterprises must take a holistic approach to their hybrid cloud cybersecurity strategies by adopting risk management solutions that can help them gain visibility into third- and fourth-party risk posture while achieving continuous compliance.” Enterprise technology analyst David Linthicum added that it’s important for companies to vet and monitor third-party cloud service providers to ensure they meet security standards and align with the organizations’ requirements.


Data Governance Coaching: A Newcomer's Journey As A Data Manager

Companies are increasingly recognizing the importance of reliable data for informed decision-making. At the heart of this transformation are individuals like me, new data managers tasked with overseeing specific data domains within the enterprise. The foundational element of this data-driven shift lies in the role concept, a framework that identifies and nominates data managers based on their skills, knowledge, and passion for data. Despite their different expertise and company affiliations, this group has a common goal – to ensure high-quality data within their respective responsibility areas. Tackling an initial use case within our data domain is crucial to embark on this journey successfully. ... The narrative of a data manager’s journey in a forward-thinking company emphasizes continuous growth through data governance coaching. A comprehensive approach, including training, use case implementation, and ongoing support, successfully operationalizes data managers. Past insights stress the importance of the close link between business processes and data management, the seamless identification of data managers, the operational-level conceptualization, and the recognition of varied data domains.


Building a Sustainable Data Ecosystem

While data sharing is essential for advancing generative AI technology, it also presents significant challenges, particularly regarding privacy, security, and ethical use of data. As generative AI models become increasingly sophisticated, concerns about potential misuse, unauthorized access, and infringement of individual rights have grown. Developing sustainable policy frameworks is crucial to address these challenges and ensure that generative AI technology is deployed responsibly and ethically. Effective policies can establish guidelines and standards for data-sharing practices, promote transparency and accountability, and mitigate risks associated with privacy violations and misuse of generated content. Moreover, robust policy frameworks can foster stakeholder trust, encourage collaboration, and contribute to generative AI technology's long-term sustainability and advancement. Generative AI is a subset of artificial intelligence focused on creating new content that mimics or resembles human-generated content, such as images, text, or sound. This is achieved through machine learning techniques, including deep learning algorithms such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformers.


Why Are There Fewer Women Than Men in Cybersecurity?

The tech industry, including cybersecurity, has been rightly criticized for its "bro culture," which can be unwelcoming and even hostile to women. This culture is characterized by practices and attitudes that devalue women's contributions, overlook them for promotions and challenging projects, and subject them to harassment and discrimination. The recent surge in employee population growth from other cultures, many of which are used to the devaluation of women outside of the workforce, doesn’t translate well or do anything reformative. Such an environment not only discourages women from remaining in the field but also dissuades others from entering it. The underrepresentation of women in cybersecurity is also self-perpetuating due to the lack of visible female role models in the field. Women considering a career in cybersecurity often find few examples of successful female professionals to inspire them. This lack of visibility contributes to the misconception that cybersecurity is not a viable or welcoming career path for women. The absence of female mentors and role models means that aspiring women in cybersecurity lack guidance, support and networking opportunities that are crucial for career development and advancement in any and all fields.


Answers for the IT Skills Gap

One effective strategy is to deploy autonomous automation into your enterprise storage infrastructure, so it reduces the level of complexity, thereby decreasing the dependence on specialized IT skills that are becoming harder to find. With the power of autonomous automation, an admin can manage petabytes of storage easily and cost effectively. ... A complementary strategy is to automate the technical support process through Artificial Intelligence for IT Operations (AIOps). AIOps supports scalable, multi-petabyte storage-as-a-service (STaaS) solutions, enabling enterprises to simplify and centralize IT operations and improve cost management. ... A third strategy for shortening the gap is through storage consolidation. We have a $20 billion enterprise customer that went from 27 storage arrays from three different vendors to only four arrays. A Fortune 100 customer dramatically reduced their storage infrastructure, going from 450 floor tiles to only 50 floor tiles running all the same applications and workloads. This consolidation had many benefits, but one of the key ones was reducing the need for IT manpower. You don’t need such high-level skills with years of experience when the need for IT resources has been streamlined.


6 CISO Takeaways From the NSA's Zero-Trust Guidance

After tackling any other fundamental pillars, companies should kick off their foray into the Network and Environment pillar by segmenting their networks — perhaps broadly at first, but with increasing granularity. Major functional areas include business-to-business (B2B) segments, consumer-facing (B2C) segments, operational technology such as IoT, point-of-sale networks, and development networks. After segmenting the network at a high level, companies should aim to further refine the segments, Rubrik's Mestrovich says. "If you can define these functional areas of operation, then you can begin to segment the network so that authenticated entities in any one of these areas don't have access without going through additional authentication exercises to any other areas," he says. "In many regards, you will find that it is highly likely that users, devices, and workloads that operate in one area don't actually need any rights to operate or resources in other areas." Zero-trust networking requires companies to have the ability to quickly react to potential attacks, making software-defined networking (SDN) a key approach to not only pursuing microsegmentation but also to lock down the network during a potential compromise.
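The segmentation rule Mestrovich describes (crossing a segment boundary triggers additional authentication) can be sketched as a simple policy function. The segment map, host names, and decision labels below are invented for illustration; real microsegmentation is enforced by SDN controllers and identity-aware proxies, not application code.

```python
# Hypothetical mapping of hosts to functional areas (segments).
SEGMENTS = {
    "pos-terminal-7": "point-of-sale",
    "build-server-2": "development",
    "shop-frontend": "b2c",
}

def access_decision(source, target, mfa_verified):
    # Same segment: allow. Boundary crossing: require a fresh, stronger
    # authentication step before granting access.
    src_seg = SEGMENTS[source]
    dst_seg = SEGMENTS[target]
    if src_seg == dst_seg:
        return "allow"
    return "allow" if mfa_verified else "step-up-auth"

print(access_decision("pos-terminal-7", "shop-frontend", mfa_verified=False))
print(access_decision("build-server-2", "build-server-2", mfa_verified=False))
```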


The Role of Enterprise Architecture in Business Transformation

In the context of strategy management, tools such as strategic roadmaps and business model canvases can support planning and communicating the business objectives of your organization. To put the strategy into execution, businesses need to organize their resources – people, process, information and technologies – into a composable set of capabilities. These are usually documented in the form of a business capability map. To provide an overview of the available and required resources, portfolios such as process portfolio, application portfolio management, data catalogue and technology radar need to be in place. One or more capabilities are described in operating models. Here, organizations define how the elements of the portfolio are connected to realize the said capabilities. By analysing capability maturity, data quality, and technology fitness, strategic gaps are identified and roadmaps for implementation and transformation are specified to close these gaps. ... EA can serve many initiatives and therefore many stakeholders in your organization. However, no matter how convenient and simple EA can be, we cannot expect everyone to be familiar with every aspect of EA, nor with the modeling languages that are used to implement it.



Quote for the day:

"Leadership means forming a team and working toward common objectives that are tied to time, metrics, and resources." -- Russel Honore

Daily Tech Digest - March 15, 2024

AI hallucination mitigation: two brains are better than one

LLMs have been characterized as stochastic parrots — as they get larger, they become more random in their conjectural answers. These “next-word prediction engines” continue parroting what they’ve been taught, but without a logic framework. One method of reducing hallucinations and other genAI-related errors is Retrieval Augmented Generation or “RAG” — a method of creating a more customized genAI model that enables more accurate and specific responses to queries. But RAG doesn’t clean up the genAI mess because there are still no logical rules for its reasoning. In other words, genAI’s natural language processing has no transparent rules of inference for reliable conclusions (outputs). What’s needed, some argue, is a “formal language” or a sequence of statements — rules or guardrails — to ensure reliable conclusions at each step of the way toward the final answer genAI provides. Natural language processing, absent a formal system for precise semantics, produces meanings that are subjective and lack a solid foundation. But with monitoring and evaluation, genAI can produce vastly more accurate responses.
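The RAG approach mentioned above can be illustrated with a minimal sketch: retrieve the passages most relevant to a query, then prepend them to the prompt so the model answers from supplied context instead of conjecture. The toy retriever below scores passages by simple word overlap, and the corpus is invented; production RAG uses embedding similarity over a vector index.

```python
def retrieve(query, corpus, top_k=2):
    # Toy retriever: score passages by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, corpus):
    # Ground the model by prepending retrieved passages to the question.
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer using ONLY the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "The warranty period for the X100 router is two years.",
    "The X100 router supports WPA3 encryption.",
    "Our office is closed on public holidays.",
]
prompt = build_rag_prompt("What is the warranty period for the X100 router?", corpus)
print(prompt)
```

The assembled prompt then goes to the LLM; constraining the answer to retrieved context is what makes responses more accurate and specific, though, as the article notes, it does not add logical rules of inference.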


The Courtroom Factor in GenAI’s Future

There are a lot of moving parts. You kind of hit that on the head. Certainly, every day there’s something new, some development, but let me focus on my area of expertise, which is litigation and where I see some of the domestic generative AI litigation perhaps trending or where I think we’re going to see an increase in litigation going forward. I think that’s going to be twofold. I think you’re going to continue to see the intellectual property issues attendant to generative AI litigated. I think that’s one area that’s inevitable. I think the other area that we’re really going to start to see, and we already are seeing an uptick in litigation, is in the use and deployment of generative AI by companies. Let me frame it this way. As companies attempt to take advantage of the promise of generative AI, they’re going to, they already have, and they will continue to deploy generative AI tools and generative AI systems, more advanced systems in terms of machine learning, and generative aspects of AI in their businesses. I think we’ll see a steady increase in use -- and some folks would say misuse -- of AI. It’s trickling out where plaintiffs allege that the business or the entity has done something wrong using AI.


Next-Gen DevOps: Integrate AI for Enhanced Workflow Automation

In DevOps, the ability to anticipate and prevent outages can mean the difference between success and catastrophic failure. In such situations, AI-powered predictive analytics can empower teams to stay one step ahead of potential disruptions. Predictive analytics uses advanced algorithms and machine learning models to analyze vast amounts of data from various sources, such as application logs, system metrics, and historical incident reports. It then identifies patterns, correlations, and anomalies within this data to provide early warnings of impending system failures or performance degradation. This enables teams to take proactive measures before issues escalate into full-blown outages. ... Doing things by hand introduces the possibility of human error and is far too time-intensive — so it comes as no surprise that the industry is turning toward automation. Tools that utilize artificial intelligence can identify potential issues by analyzing code repositories at speeds that cannot be replicated by humans. On the ground level, this means that various potential issues — performance bottlenecks, code that doesn't meet best practices or internal standards, security liabilities, and code smells — can be identified quickly and at scale.
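The early-warning idea behind such predictive analytics can be sketched very simply: establish a rolling baseline from recent metric samples and flag values that deviate sharply from it. The z-score rule and the synthetic latency series below are illustrative stand-ins; production systems apply learned models across logs, metrics, and incident history.

```python
# Minimal anomaly-detection sketch: flag metric samples that fall far outside
# the trailing window's baseline, as an early warning before a full outage.
from statistics import mean, stdev

def rolling_zscore_alerts(samples, window=5, threshold=3.0):
    """Return indices of samples whose z-score vs. the trailing window exceeds threshold."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Synthetic p95 latency (ms): steady traffic, then a spike worth alerting on.
latency = [101, 99, 102, 100, 98, 103, 97, 250, 102, 100]
print(rolling_zscore_alerts(latency))  # → [7], the index of the 250 ms spike
```

The design point is the same one the excerpt makes: catching the deviation at index 7 lets a team intervene before the degradation compounds into an outage.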


Key MITRE ATT&CK techniques used by cyber attackers

Half of the top threats are ransomware precursors that could lead to a ransomware infection if left unchecked, with ransomware continuing to have a major impact on businesses. Despite a wave of new software vulnerabilities, humans remained the primary vulnerability that adversaries took advantage of in 2023, compromising identities to access cloud service APIs, execute payroll fraud with email forwarding rules, launch ransomware attacks, and more. As organizations migrate to the cloud and rely on a growing array of SaaS applications to manage and access sensitive information, identities are the ties that bind all these systems together. Adversaries have quickly learned that these systems house the information they want and that valid and authorized identities are the most expedient and reliable way into those systems. Researchers noted several broader trends impacting the threat landscape, such as the emergence of generative AI, the continued prominence of remote monitoring and management (RMM) tool abuse, the prevalence of web-based payload delivery like SEO poisoning and malvertising, the increasing necessity of MFA evasion techniques, and the dominance of brazen but highly effective social engineering schemes such as help desk phishing.


Data management trends: GenAI, governance and lakehouses

Nearly every major database and data platform vendor had some form of generative AI news in 2023. Some vendors introduced generative AI as an assistant that helps users carry out different tasks. Managing data platforms and writing different types of data queries have long been complicated exercises, and generative AI simplifies them. Among the many vendors that integrated some form of AI assistant, Dremio launched its Text-to-SQL AI-powered tool in June, which enables users to generate SQL queries more easily. In August, Couchbase announced Capella iQ, a generative AI tool that helps developers write database application code. Also in August, SnapLogic rolled out its SnapGPT AI tool to help users build data pipelines using natural language. ... Whether it's for AI, data operations or analytics, the topic of data governance is increasingly important. Being able to understand where data comes from, how to make it available, and how to use it is important for security, privacy, accuracy and reliability. Over the course of 2023, multiple vendors expanded and enhanced data governance capabilities to help manage data.


The importance of "always-ready" data

Imagine living in a world where data is prepared on an ongoing basis – that is, data prepared so quickly, regardless of the amount, that it is always ready. Such a reality would enable enterprises to respond promptly to evolving business needs and unexpected challenges. Moreover, it would minimize backlogs of tickets and requests, granting data engineers time to be more proactive and productive. One way to facilitate this is through the use of a cloud data lakehouse. With it, data can be prepared directly on cloud storage, without the long load times that ETL-based (extract, transform, load) or ELT-based (extract, load, transform) data processing typically requires. For enterprises that manage complicated and data-heavy workloads, the result is game-changing on multiple fronts. Agile data infrastructure underpinned by superior cost performance will give enterprises an efficient means of adapting to changing market dynamics, new projects, and fluctuating customer demands. Beyond the flexibility it grants data engineers, always-ready data also empowers them to conduct ad-hoc queries and analytics as a way to derive actionable insights and predictions on the fly. 
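The contrast with a load-first pipeline can be sketched in a few lines: instead of staging raw data through a separate load job, an ad-hoc aggregate is computed straight off the data where it sits. A lakehouse engine would scan Parquet files on cloud object storage; here an in-memory CSV and Python's csv module stand in, purely for illustration.

```python
# Sketch of the "always-ready" idea: query raw data directly, with no
# intermediate warehouse load step. The tiny revenue dataset is made up.
import csv
import io

# Raw event data as it might sit in object storage, untouched by any load job.
raw = io.StringIO(
    "region,revenue\n"
    "emea,120\n"
    "amer,300\n"
    "emea,80\n"
)

# Ad-hoc aggregate computed straight off the raw data: total revenue per region.
totals: dict[str, int] = {}
for row in csv.DictReader(raw):
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["revenue"])

print(totals)  # → {'emea': 200, 'amer': 300}
```

Because no load step sits between arrival and query, the answer reflects whatever data is present the moment the question is asked — which is exactly the "always-ready" property the excerpt describes.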


AI is embedded in everything that we do

AI is embedded in everything that we do, and it is becoming visible in every aspect of software development and operations. The impact of AI on DevOps can be felt in the efficiency and speed of software development and delivery, automation in testing, security (real-time alerts), and optimization of cloud resources. Tools such as GitHub Copilot and Amazon CodeWhisperer have reduced the time it takes to create business logic, and propagation to production environments is swift, allowing teams to produce digital assets quickly. AI helps automate the CI/CD pipeline. By leveraging AI-powered monitoring and management tools, DevOps teams can automate routine tasks, predict performance issues, rectify errors quickly, and optimize resource utilization across diverse cloud platforms. AI-driven solutions help DevOps teams dynamically allocate resources, detect anomalies, and enforce compliance across multi-cloud deployments. Thus, DevOps teams are in a better position to get actionable insights and intelligent decision-making capabilities in multi-cloud environments. AI technologies can help build automated workflows and improve collaboration and experiment tracking. 


Why public cloud providers are cutting egress fees

This customer discontent is not lost on cloud providers, who are initiating a significant shift in their pricing strategies by reducing these charges. Google Cloud announced it would eliminate egress fees, a strategic move to attract customers from its larger competitors, AWS and Microsooft. This was not merely a pricing play but also a response to regulatory pressures, greater competition, and the significant drop in hardware costs over the past several years. The cloud computing landscape has changed, and providers are continually looking for ways to differentiate themselves and attract more users. Today the competition comes not only from other public cloud providers but also from managed service providers (MSPs) and regional cloud services. Microclouds are also emerging, driven mainly by generative AI and the need to find more cost-effective cloud alternatives for using GPU-powered systems on demand. Changing governmental policies and market demand also put pressure on providers to remove or reduce these fees. The best example is the European Data Act, which is aimed at fostering competition by making it easier for customers to switch providers.


Redefining multifactor authentication: Why we need passkeys

Authenticator apps, designed to provide a second layer of security beyond traditional passwords, have been lauded for their simplicity and added security. However, they are not without flaws. One significant issue is MFA fatigue, a phenomenon in which users, overwhelmed by frequent authentication requests, or simply worn down following a password spray attack, inadvertently grant access to attackers. Additionally, attacker-in-the-middle (AiTM) techniques such as Evilginx2 exploit the communication between the user and the service, bypassing even the newer code-matching experience provided by modern authenticator apps. ... IP fencing may have a role as a fourth factor of authentication (after password, authenticator app, and device) in restricting privileged IT accounts, but it does not scale to regular users: privacy features in operating systems such as Apple's iOS (beginning with version 15) make IP fencing unrealistic, since connections are shielded behind relays such as Cloudflare. Security operations center (SOC) analysts struggle to identify these connections if the identity system is not designed to authenticate both the user and the device.
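The AiTM weakness follows from how authenticator-app codes are constructed. A TOTP code (RFC 6238) is derived solely from a shared secret and the current time, never from the site's origin, so a proxy that relays a freshly typed code is indistinguishable from the real user; passkeys close this gap by cryptographically binding the credential to the origin. The sketch below implements the standard TOTP derivation (the secret is the RFC's published test value, not a real one):

```python
# RFC 6238 TOTP: the code depends only on (secret, time). Nothing about the
# relying party's origin enters the computation, which is why an AiTM proxy
# such as Evilginx2 can replay a relayed code within its validity window.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)          # moving time factor
    mac = hmac.new(secret, counter, hashlib.sha1).digest()  # keyed hash of counter
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The verifier checks only secret + time; WHO submits the code is unknowable.
print(totp(b"12345678901234567890", unix_time=59, digits=8))  # → "94287082" (RFC 6238 test vector)
```

A WebAuthn passkey, by contrast, signs a challenge that includes the requesting origin, so a code (or signature) captured on a phishing domain is useless against the legitimate site.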


As Attackers Refine Tactics, 'Speed Matters,' Experts Warn

Experts regularly recommend keeping abreast of tactics used by groups such as Scattered Spider and reviewing defenses to ensure they can cope. "Thwarting Muddled Libra requires interweaving tight security controls, diligent awareness training and vigilant monitoring," Unit 42 said in a blog post. The researchers particularly recommend having baselines of typical activity and configurations, especially to spot unexpected changes in infrastructure, dormant accounts becoming active, a sharp increase in remote management tool usage, a sudden surge in multifactor authentication push requests, or the sudden appearance of red-team tools in the environment. "If you see red-teaming tools in your environment, make sure there is an authorized red-team engagement underway," Unit 42 said. "One SOC we worked with had a company logo sticker on the wall for each red team they'd caught." Some effective defenses involve a heavy dose of process and procedure, rather than just technology. Especially with MFA, and with someone who appears to have lost their phone and is trying to reenroll (which shouldn't happen often), "put additional scrutiny on changes to high-privileged accounts," Unit 42 said.



Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous