Daily Tech Digest - March 22, 2024

Why adversarial AI is the cyber threat no one sees coming

Don’t settle for doing red teaming on a sporadic schedule, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps supporting MLOps from now on. The goal is to preemptively identify system and pipeline weaknesses and work to prioritize and harden any attack vectors that surface as part of MLOps’ System Development Lifecycle (SDLC) workflows. ... Have a member of the DevSecOps team stay current on the many defensive frameworks available today. Knowing which one best fits an organization’s goals can help secure MLOps, saving time and securing the broader SDLC and CI/CD pipeline in the process. Examples include the NIST AI Risk Management Framework and OWASP AI Security and Privacy Guide. ... Consider using a combination of biometric modalities, including facial recognition, fingerprint scanning, and voice recognition, combined with passwordless access technologies to secure systems used across MLOps. Gen AI has proven capable of helping produce synthetic data.


300K Internet Hosts at Risk for 'Devastating' Loop DoS Attack

The attack exploits a novel traffic-loop vulnerability present in certain user datagram protocol (UDP)-based applications, according to a post by the Carnegie Mellon University's CERT Coordination Center. An unauthenticated attacker can use maliciously crafted packets against a UDP-based vulnerable implementation of various application protocols such as DNS, NTP, and TFTP, leading to DoS and/or abuse of resources. ... The researchers put the attack on par with amplification attacks in the volumes of traffic they can cause, with two major differences. One is that attackers do not have to continuously send attack traffic due to the loop behavior, unless defenses terminate loops to shut down the self-repetitive nature of the attack. The other is that without a proper defense, the DoS attack will likely continue for a while. Indeed, DoS attacks are almost always about resource consumption in Web architecture, but until now it's been extremely tricky to use this type of attack to take a Web property completely offline because "you have to have systems smart enough to gather an army of hosts that will call upon the victim web architecture all at once," explains Jason Kent.
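
To make the loop mechanism concrete, here is a minimal sketch (ours, not the researchers' proof of concept) of why error-for-error UDP services are loopable: once each side answers every datagram with an error reply, a single injected packet keeps them talking indefinitely. Ports and messages here are arbitrary assumptions, and the round cap exists only so the demo terminates.

```python
# Illustration of the Loop DoS mechanism: two UDP services that each answer
# every inbound datagram with an error reply will bounce one injected packet
# back and forth forever. Not the researchers' exploit; a toy model of it.
import socket
import threading

def make_responder(port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    return sock

def error_loop(sock: socket.socket, peer_port: int, rounds: int = 5) -> None:
    """Answer every inbound datagram with an error message - the behavior
    that makes a protocol loopable."""
    for _ in range(rounds):  # capped so the demo terminates
        data, _addr = sock.recvfrom(512)
        print(f"port {sock.getsockname()[1]} got {data!r}, replying with error")
        # A real vulnerable server replies to the (spoofable) source address;
        # the peer is hardcoded here only to keep the demo self-contained.
        sock.sendto(b"ERROR: bad request", ("127.0.0.1", peer_port))

a_sock, b_sock = make_responder(9001), make_responder(9002)  # bind first
a = threading.Thread(target=error_loop, args=(a_sock, 9002))
b = threading.Thread(target=error_loop, args=(b_sock, 9001))
a.start(); b.start()

# The "attacker": a single injected datagram starts a self-sustaining loop.
trigger = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
trigger.sendto(b"malformed", ("127.0.0.1", 9001))
a.join(); b.join()
```

In the real attack, the trigger packet spoofs one vulnerable server's address as its source, which is why no attacker traffic is needed after the first datagram.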


Security Flaw Can Open Over 3 Million Door Locks, Mainly at Hotels

The researchers have not released technical details to prevent hackers from exploiting the threat. Nevertheless, the vulnerability is relatively easy for a bad actor to abuse. “An attacker only needs to read one keycard from the property to perform the attack against any door in the property. This keycard can be from their own room, or even an expired keycard taken from the express checkout collection box,” they wrote. ... The vulnerability affects all locks under the Saflok brand, including the Saflok MT, the Quantum Series, the RT Series, the Saffire Series and the Confidant Series, among others. Unfortunately, it’s impossible for a hotel guest to visually tell if a lock has been patched, the researchers say. Whether anyone else knows about the flaw remains unclear. ... In a statement, Dormakaba confirmed that the flaw exists. "As soon as we were made aware of the vulnerability by a group of external security researchers, we initiated a comprehensive investigation, prioritized developing and rolling out a mitigation solution, and worked to communicate with customers systematically," the company said.


Quantum talk with magnetic disks

While much attention has been directed towards the computation of quantum information, the transduction of information within quantum networks is equally crucial in materializing the potential of this new technology. Addressing this need, a research team at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) is now introducing a new approach for transducing quantum information. The team has manipulated quantum bits, so-called qubits, by harnessing the magnetic field of magnons—wave-like excitations in a magnetic material—that occur within microscopic magnetic disks. The researchers have presented their results in the journal Science Advances. The construction of a programmable, universal quantum computer stands as one of the most challenging engineering and scientific endeavors of our time. The realization of such a computer holds great potential for diverse industry fields such as logistics, finance, and pharmaceutics. However, the construction of a practical quantum computer has been hindered by the intrinsic fragility of how the information is stored and processed in this technology. 


Relational Data at the Edge: How Cloudflare Operates Distributed PostgreSQL Clusters

The team opted for stolon, an open-source cluster management system written in Go, running as a thin layer on top of the PostgreSQL cluster, with a PostgreSQL-native interface and support for multiple-site redundancy. It is possible to deploy a single stolon cluster distributed across multiple PostgreSQL clusters, whether within one region or spanning multiple regions. Stolon's features include stable failover with minimal false positives, with the Keeper Nodes acting as the parent processes managing PostgreSQL changes. Sentinel Nodes function as orchestrators, monitoring Postgres components' health and making decisions, such as initiating elections for new primaries. ... Cloudflare chose PgBouncer as the connection pooler due to its compatibility with PostgreSQL protocol: the clients can connect to PgBouncer and submit queries as usual, simplifying the handling of database switches and failovers. PgBouncer, a lightweight single-process server, manages connections asynchronously, allowing it to handle more concurrent connections than PostgreSQL.
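
As an illustration of that client-facing simplicity, a minimal pgbouncer.ini sketch (hypothetical hostnames and sizing, not Cloudflare's configuration) shows the pattern: clients speak the ordinary PostgreSQL protocol to PgBouncer, which multiplexes them onto a much smaller pool of server connections, and the pooler is the only place that needs to learn about a primary switch.

```ini
; Minimal pgbouncer.ini sketch with placeholder names and values.
[databases]
; Point clients at the current stolon primary; on failover only this
; target changes, never the clients.
appdb = host=primary.pg.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; Transaction pooling releases a server connection after each transaction,
; letting PgBouncer serve far more clients than Postgres has backends.
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 50
```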


Downtime Cost of Cyberattacks and How to Reduce It

“IT leaders and other business decision makers must think critically about their support teams, identifying and encouraging continual upskilling via real-world scenarios to mirror the threats they’re likely to experience,” Hynes advises. "Staying skilled in parallel to increasingly complex and intelligent cyberattacks can make recovery more efficient and alleviate unnecessary downtime that puts the company reputation and stakeholder relationships at risk.” This will often necessitate de-siloing an organization. As one paper observes, cybersecurity analysts are sometimes not looped into continuity plans, making those plans next to worthless when something actually happens. Conversely, analysts do not necessarily share the findings of their risk assessments with the necessary departments. So, nobody can plan accordingly. As previously referenced, planning for alternate means of communication, whether it be in a hospital or in another business, is crucial. Ensuring that an immediate fallback to typical communication channels is in place will almost assuredly save time in the event of an attack.


Are cobots collaborative enough to protect your cyberspace?

Critics argue that the rise of collaborative robots in modern manufacturing brings about cybersecurity concerns. These sophisticated machines, integrated with advanced sensors and AI, work alongside humans in shared spaces, posing risks that must be addressed. Unauthorised access to sensitive data in cobots can lead to intellectual property theft and operational disruptions, while cyber attackers manipulating cobot programming can cause product and equipment damage and physical harm to workers. “Moreover, disabling safety mechanisms through cyber attacks exposes workers to injury risks. To safeguard human labour in collaborative environments, comprehensive cybersecurity strategies are imperative. This entails regular software updates, encryption methods, and continuous monitoring for swift responses to potential breaches,” explained Vineet Kumar, global president and founder of CyberPeace, a cybersecurity non-profit organisation. Experts believe that the firmware, which controls lower-level operations such as sensors and actuators, is often updated over the air, leaving robots vulnerable to penetration through the network or its peripherals.


Legal Issues for Data Professionals: AI Creates Hidden Data and IP Legal Problems

From a legal perspective, the core risks are a) that analytics run on the database will disclose customer information in violation of the confidentiality agreement and b) that the use of the customer information could be outside the scope of permitted use. It is common for confidentiality and nondisclosure obligations to be integrated into the governing agreement and not in a standalone nondisclosure agreement (NDA). Further, in many industries, the terms of customer confidentiality will be tailored specifically to the transaction and the agreement. As a result, it requires both business and legal analysis to determine the permitted, the prohibited, and the gray areas for scope of use. It is important to note that the company and the customer entered into an NDA at the beginning of the transaction or before the transaction. The NDA may have different terms than the final agreement, but both the NDA and the agreement, with their different terms, will be in the database. In addition, a company’s use of confidential information outside its permitted scope of use constitutes a breach of contract and could result in liability and the award of monetary damages against the company. 


CDOs, data science heads to fill Chief AI Officer positions in India

The refactoring of C-level technology roles across Indian enterprises, according to CK Birla Hospitals’ CIO Mitali Biswas, can be chalked up to the dearth of talent or skills presently available to take on the responsibilities of the role or create an efficient team under that position. “While larger enterprises may still want to create a new position and a team around it, small and medium businesses will look up to their existing technology leaders, such as the CIO or the CTO or the CDO, to take up the CAIO mantle,” Biswas explained, adding that the maturity and pervasiveness of the CAIO role, at least in the Indian healthcare sector, is two to three years away. Santanu Ganguly, CEO of advisory firm StrategINK, said he believes that other industry sectors, including healthcare, will see the role of CAIO being adopted in the next one to three years, driven by boards’ and CEOs’ agenda of shaping the future of customer-centricity, innovation, enhanced productivity, and efficient operations. Along the same lines, Gaurav Kataria, vice president of digital manufacturing and CDIO at PSPD, ITC Limited, said that the evolution of the CAIO role is already happening in India.


The Changing Face of Consumption and End User Experience

IT architecture has evolved through several distinct epochs that supported the evolution of business technology. This business technology shift is more than a “consumption gap … the idea that technology companies can add features and complexity to their products much faster than consumers can consume them.” Indeed, we have seen a trend that argues that increased technology results in improved flexibility and intuitive product use. As Steve Jobs said, “Simple can be harder than complex: You must work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” As business technology evolves, it delivers simplicity with more flexible consumption models that support building a more intuitive and contextual end-user experience. The architectural skills that support this evolution are also changing. This article discusses how those IT architecture skills are evolving. It suggests how the architecture toolkit for the future is also evolving so that we can continue to evolve technologies that “can make life easier … [we] touch the right people. [with] things [that] can profoundly influence life.”



Quote for the day:

"Holding on to the unchangeable past is a waste of energy, and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - March 21, 2024

India’s digital healthcare program promises to democratise healthcare for all

At an advanced level, AI-powered clinical decision support systems riding on the back of EHR systems could help physicians manage a much higher patient workload without compromising on the quality or accuracy of their diagnostics. This could be transformational for India’s universal healthcare goal given the sheer scarcity of qualified doctors in our country. Similarly, remote monitoring solutions integrated with EHR will have the capability to scrutinize health data generated by patients through wearables. This can facilitate the prompt identification of potential health issues, alerting healthcare providers and enabling timely interventions. Going further, the application of predictive analytics on anonymised aggregate-level data stands poised to significantly contribute to the identification and mitigation of large at-risk populations through proactive preventive measures. This sophisticated application paves the way for a healthcare strategy that is comprehensive and yet targeted at the most vulnerable communities. The combination of AI and EHR will open whole new possibilities. 


API-first observability for the API era

In order for the developer not to have to keep logs, dashboards, and alerts up-to-date with every new change to the app, there needs to be a basic level of production visibility that is both automatic and covers the entire system. ... If today’s logs and error alerts are too overwhelming, we’re going to need to move up the stack. The best candidate: APIs. Not only are APIs a functional boundary for the software, but they often demarcate an organizational boundary, the hand-off point between one team and another. Monitoring at the API level means identifying the issues that are most likely to impact others and matching the monitoring to the boundaries of responsibility. Getting comprehensive metrics across API endpoints allows teams to get ahead of alerts and quickly determine who may be responsible for investigations and fixes. ... Modern software exists as a collage across tech stacks and partial migrations, where only some of the software is paged in by the current members of the organization. In order for a monitoring approach to be comprehensive, it needs to easily work across programming languages and frameworks, without requiring changes to the underlying system.
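
One way to realize "monitoring at the API level," sketched below as a small WSGI middleware (our illustration, not any vendor's product): it aggregates per-endpoint counts, errors, and latency for whatever application sits behind it, independent of framework or language behind the boundary.

```python
# A minimal sketch of API-level monitoring: a WSGI middleware that records
# per-endpoint request counts, error counts, and latency. Names illustrative.
import time
from collections import defaultdict

class EndpointMetrics:
    """Wrap any WSGI app and aggregate metrics per (method, path)."""

    def __init__(self, app):
        self.app = app
        self.stats = defaultdict(lambda: {"count": 0, "errors": 0, "total_s": 0.0})

    def __call__(self, environ, start_response):
        key = (environ.get("REQUEST_METHOD", "GET"), environ.get("PATH_INFO", "/"))
        seen = {}

        def recording_start_response(status, headers, exc_info=None):
            seen["status"] = status  # capture the status the app reports
            return start_response(status, headers, exc_info)

        started = time.monotonic()
        try:
            return self.app(environ, recording_start_response)
        finally:
            entry = self.stats[key]
            entry["count"] += 1
            entry["total_s"] += time.monotonic() - started  # approximate latency
            if seen.get("status", "500")[:1] in ("4", "5"):
                entry["errors"] += 1

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = EndpointMetrics(demo_app)  # drop-in wrapper for any WSGI framework
```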


Chasing the Tech in Architect

While much of our focus for the BTABoK was originally on building a suite of techniques for architecture teams at the practice and strategy level, it is also extremely important that we provide reusable tools at the execution level, and that these tools be accounted for and traceable to the outcome, strategy, and stakeholder side of the BTABoK. Only then will the knowledge properly connect all practitioners of architecture into a unified and effective practice. The BTABoK describes architecture and engineering/delivery as separate professions and trades. Architecture and engineering have overlapping responsibilities and competencies in certain areas, but they remain separate. This policy best supports the individual value of both, and evidence (surveys and interviews) suggests that while other configurations are possible, program work is more successful when both professions are present, meaning a lead engineer and a lead architect create better outcomes than when there is only one. However, much of this is context-sensitive, and much more research is needed in this space before we can make conclusive recommendations.


Sustainable Data Management: An Overview

Optimizing data storage and cloud computing sustainability is crucial in shaping a greener digital future. To achieve this, several key steps can be taken. Firstly, organizations should prioritize the implementation of energy-efficient data centers. These centers can significantly reduce power consumption by utilizing advanced cooling techniques, virtualization technologies, and renewable energy sources. Secondly, adopting a comprehensive data management strategy is essential. This involves consolidating and organizing data to minimize redundancy and improve storage efficiency. ... The circular economy focuses on reducing waste and maximizing resource efficiency. In data management, this entails implementing strategies such as refurbishing and reusing outdated equipment, promoting recycling programs for electronic devices, and responsibly disposing of hazardous materials. By implementing these measures, organizations can not only minimize their ecological footprint but also reduce costs associated with purchasing new hardware and comply with regulations related to e-waste management.


New Gmail Security Rules—You Have 14 Days To Comply, Google Says

Google has been making it explicit since October 2023 that new email sender authentication rules will result in some messages to Gmail accounts being rejected and bounced back to the sender en masse. Neil Kumaran, a Google group product manager responsible for Gmail security and trust, announced that “starting in 2024, we’ll require bulk senders to authenticate their emails, allow for easy unsubscription and stay under a reported spam threshold.” Some of these new protections are scheduled to start in 14 days and will impact every holder of a personal Gmail account in a very positive way. ... Although Google does appear to be taking a slow and steady approach to the new rules for bulk email senders to Gmail accounts, you can expect things to start ramping up from April 1. “Starting in April 2024, we’ll begin rejecting non-compliant traffic,” Google has stated in an email sender guidelines FAQ, continuing, “we strongly recommend senders use the temporary failure enforcement period to make any changes required to become compliant.”
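
The authentication piece of those rules centers on SPF, DKIM, and DMARC. As a rough sketch (placeholder domain and mailbox), the DNS side looks like the records below; DKIM public keys are published separately under a selector such as selector1._domainkey.example.com, usually provided by the sending platform.

```dns
; Hypothetical DNS records of the kind Google's bulk-sender rules require:
; SPF authorizes sending hosts, DMARC tells receivers what to do when
; authentication fails and where to send aggregate reports.
example.com.         3600 IN TXT "v=spf1 include:_spf.google.com ~all"
_dmarc.example.com.  3600 IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```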


Supercomputing’s Future Is Green and Interconnected

Well, we are building a new machine with 96 GPUs; these will be the SXM5s, water-cooled NVLink-connected devices. We will know soon if they will have better performance. As I mentioned, they may be faster, but they may not be more efficient. But one thing we found with our A100s was that most of the performance is available in the first half of the wattage, so you get 90 percent of the performance in the first 225 watts. So, one of the things that we’re going to try with the water-cooled system is to run it in power-capped mode and see what kind of performance we get. One nice thing about the water-cooled version is that it doesn’t need fans, because the fans count against your wattage. When these units are running, it’s about four kilowatts of power per three units of rack space (3U). So it’s like forty 100-watt light bulbs in a small box. Cooling that down requires blowing a tremendous amount of air across it, so you can have a few hundred watts of fans. And with water cooling, you just have a central pump, which means significant savings. The heat capacity of water is about 4,000 times the heat capacity of air by volume, so you have to use a lot less of it.

5 Ways CISOs Can Navigate Their New Business Role

CISOs can’t afford to not pay attention to their data breach liability: A breakdown from the firm of the top 35 breaches across the world in 2023 found that organizations paid almost $2.6 billion in fines for exposing 1.5 billion records, with almost half of the breaches happening at public agencies and healthcare-related industries. Among this list were breaches at many of the world's largest telecommunications providers. Out of the top 35 breaches, all but one happened in the European Union and US. ... Further, transparency should be a natural part of a CISO's playbook, not just something that is activated in post-breach situations. Part of the motivation is compliance, as Forrester analysts noted. "Regulators are pushing for greater transparency," they wrote. ... In general, CISOs need to "own it, recognize where things went wrong, and proactively work to fix them, including as many stakeholders as possible to ensure you fix the root cause and identify any other issues that may have been missed," Shier says. "This is especially true now that CISOs are increasingly being held personally accountable for issues that may arise from corporate negligence or security issues that were persistent, known, and not mitigated."

Invest in human capital to create a dynamic, resilient workforce fit for the future

Competition, changing demographics, and evolving skill requirements are some of the challenges in the insurance industry. At Ageas Federal, our 4G Employee Value Proposition (EVP) provides an opportunity for employees to be part of the transformation journey in the company and helps address the aforementioned retention challenges. Employees gain unmatched rewards through competitive remuneration, gratuity payouts, attractive incentives, and pre-defined increments. They also receive guaranteed unique benefits such as healthcare, overall wellbeing of themselves and their families, wellness programs, life cover, and various types of leaves and allowances. Employees add glory to their careers through recognition programs like star of the month, galaxy awards, and leadership awards, fostering a culture of excellence. Lastly, employees have opportunities to learn and grow through managerial development programs, structured career progression, and self-paced learning programs, ensuring continuous development and skill enhancement.


Importance of data privacy and security measures for secure digital learning

With the ever-increasing use of online platforms in education, encryption and secure communication protocols are essential for safeguarding confidential information shared in virtual classrooms, discussions and collaborative projects. The digital environment is ever-changing and cyber threats are constantly evolving. It is essential for educational institutions to remain vigilant, update security measures regularly and stay informed of new threats. Adapting to the ever-changing threat landscape is necessary to identify and address potential vulnerabilities in a timely manner. At the end of the day, data protection and security in digital learning isn't just a technical matter; it's an ethical and strategic necessity. Schools that focus on these issues not only protect sensitive information, but they also create a culture of trust, responsibility and academic success. As technology changes the way we learn, a strong focus on data protection and security will be the foundation of a strong and safe digital learning environment. As we move into the digital era and incorporate technology into our educational systems, it is essential to recognise and prioritise the safeguarding of student data.


Email Bomb Attacks: Filling Up Inboxes and Servers Near You

The measures include implementing reCAPTCHA technology to determine if a human - or bot - is attempting to use a platform. "Email bombing bots are generally unable to bypass a reCAPTCHA, which would prevent them from signing up" for a registration or other service that might help facilitate a massive email bomb attack. Users should be trained to avoid using work email addresses to subscribe to nonwork-related services and limit their online exposure to direct email addresses by using contact forms that do not expose email addresses. "Given the potential implications of such an attack on the HPH sector, especially concerning unresponsive email addresses, downgraded network performance and potential downtime for servers, this type of attack remains relevant to all users," HHS HC3 said. "Email bomb attacks are potentially disruptive and can impact the HPH through denial of services where email is a critical part of the business or clinical workflow," said Dave Bailey, vice president of consulting services at security and privacy consultancy Clearwater.
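
Server-side, the reCAPTCHA check behind that advice is a single HTTPS call against Google's documented siteverify endpoint. A minimal sketch in Python follows; the secret is a placeholder and the function name is ours.

```python
# Sketch of server-side reCAPTCHA verification: before honoring a signup,
# confirm the token the widget produced. Endpoint and fields are Google's
# documented siteverify API; the secret below is a placeholder.
import json
import urllib.parse
import urllib.request

RECAPTCHA_SECRET = "your-secret-key"  # placeholder - keep out of source control

def human_verified(token: str, remote_ip: str | None = None) -> bool:
    """Return True only if Google confirms the client solved the challenge."""
    fields = {"secret": RECAPTCHA_SECRET, "response": token}
    if remote_ip:
        fields["remoteip"] = remote_ip
    req = urllib.request.Request(
        "https://www.google.com/recaptcha/api/siteverify",
        data=urllib.parse.urlencode(fields).encode(),  # POST body
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp).get("success", False)

# A registration handler would gate account creation on human_verified(token),
# blocking the bots that automate the mass signups used in email bombing.
```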



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - March 20, 2024

How to deploy software to Linux-based IoT devices at scale

IoT development may be so nascent that it may not yet be part of your mainstream DevOps processes—you may still be in the early stages of experimentation. Once you’re ready to scale, you’ll need to bring IoT into the DevOps fold. Needless to say, the scale and costs of dealing with thousands of deployed devices are significant. DevOps is an important approach for ensuring the seamless and efficient delivery of software development, updates, and enhancements to IoT devices. By integrating IoT development into an established workflow, you’ll gain the improved collaboration, agility, assured delivery, control, and traceability that’s part of a modern DevOps process. It’s critical to use a secure deployment process to protect your IoT devices from unauthorized access, inadvertent vulnerabilities, and malware. A secure deployment must include strong authentication methods to access the devices and the management platform. The data that is transmitted between the devices and the management platform should be protected by encryption. The manner in which the client devices connect to the platform after deployment should always be encrypted as well.
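
One common way to realize that guidance is mutual TLS between each device and the management platform, sketched here with the paho-mqtt client library; the broker hostname, topic, and certificate paths are placeholder assumptions.

```python
# Mutual-TLS device-to-platform messaging, sketched with paho-mqtt's
# publish helper. Hostname, topic, and cert paths are placeholders.
import paho.mqtt.publish as publish

publish.single(
    topic="fleet/device-0001/status",
    payload="deploy-complete",
    qos=1,
    hostname="mqtt.fleet.example.com",  # management platform endpoint
    port=8883,                          # MQTT over TLS
    client_id="device-0001",
    tls={
        # The CA cert authenticates the platform to the device, while the
        # client cert/key authenticate this specific device to the platform.
        "ca_certs": "/etc/device/ca.pem",
        "certfile": "/etc/device/device-0001.crt",
        "keyfile": "/etc/device/device-0001.key",
    },
)
```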


In 5 Years, Coding will be Done in Natural Language

“Because AI is a tool,” he adds, people should be able to operate at a higher level of abstraction and become far more efficient at the job they do. Eventually, everyone is likely to be coding in natural language, but that wouldn’t necessarily make them a software engineer or a programmer. The skills required to be a coder are far more complex than being able to put prompts into an AI tool, copy the code, or merely type in natural language. ... Soon, there could be a programming language built on our very own English. Not to be confused with prompt engineering, the term natural language programming means that most of the coding would be done by the software in the backend. The programmer would only have to interact with the tool in English, or any other language, and never even look at the code. In contrast, a few experts believe that English cannot be a programming language because it is filled with misunderstandings. “If they’re going into machines, which will affect the lives of people, we can’t afford that level of comedy,” said Douglas Crockford when talking to AIM.


Cybersecurity's Future: Facing Post-Quantum Cryptography Peril

The post-quantum cryptography era might not be open season on unprepared systems, he says, but rather an uneven landscape. There are layers of concerns to consider. “What I think scares me a little bit is that this type of attack is somewhat quiet,” Ho says. “The people who are going to be taking advantage of this -- the few people initially who have quantum computers as you can imagine, probably state actors -- will want to keep this on the downlow. You wouldn’t know it, but they probably already have access.” ... “From a technical perspective … being quantum-safe -- it’s a binary thing. You either are or you’re not,” says Duncan Jones, head of quantum cybersecurity with quantum computing company Quantinuum. If there is a particular computer system that an organization fails to migrate to new standards and protocols, he says that system will be vulnerable. However, the barrier to entry for access to quantum compute resources may limit the potential for early attackers who already have pockets deep enough to procure the technology. 


The New CISO: Rethinking the Role

CISOs need to be negotiators. They need to argue in favor of stronger security and convince boards and business units of the risks in terms they understand. How a CISO goes about this can vary, depending on whether board members' experience is in technology or business. Providing a demonstration that puts the technical risk into a business perspective can be helpful. CISOs should also talk with other C-level executives — as well as CISOs from other industries — to get advance buy-in and different perspectives on similar conversations they're having with their boards. ... CISOs need to be comfortable developing a risk-based approach focusing on the importance of resiliency, because attackers will get in. Developing a tested plan to respond to attacks is just as important as implementing preventative measures. … it's balancing the risk with the cost. ... CISOs should build a deeply technical team that can focus on key security practices. They should run tabletop exercises on scenarios such as a system shutdown or inability to connect to the Internet. CISOs must not rely on assumptions about how to respond; running through and testing all response plans is vital.


Architect’s Guide to a Reference Architecture for an AI/ML Data Lake

If you are serious about generative AI, then your custom corpus should define your organization. It should contain documents with knowledge that no one else has, and should only contain true and accurate information. Furthermore, your custom corpus should be built with a vector database. A vector database indexes, stores and provides access to your documents alongside their vector embeddings, which are the numerical representations of your documents. ... Another important consideration for your custom corpus is security. Access to documents should honor access restrictions on the original documents. (It would be unfortunate if an intern could gain access to the CFO’s financial results that have not been released to Wall Street yet.) Within your vector database, you should set up authorization to match the access levels of the original content. This can be done by integrating your vector database with your organization’s identity and access management solution. At their core, vector databases store unstructured data. Therefore, they should use your data lake as their storage solution.
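
Stripped to its essence, this is what a vector database does: keep documents beside their embeddings and answer queries by similarity. A toy sketch in plain NumPy, with 4-dimensional stand-ins for real model embeddings:

```python
# Core vector-database behavior in plain NumPy: store documents with their
# embeddings, retrieve by cosine similarity. Vectors here are toy values.
import numpy as np

corpus = [
    ("q3_report.txt",  np.array([0.9, 0.1, 0.0, 0.2])),
    ("hr_policy.txt",  np.array([0.1, 0.8, 0.3, 0.0])),
    ("eng_design.txt", np.array([0.0, 0.2, 0.9, 0.1])),
]

def top_k(query_vec: np.ndarray, k: int = 2) -> list[tuple[str, float]]:
    """Rank documents by cosine similarity to the query embedding."""
    scores = []
    for name, emb in corpus:
        sim = float(np.dot(query_vec, emb)
                    / (np.linalg.norm(query_vec) * np.linalg.norm(emb)))
        scores.append((name, sim))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

print(top_k(np.array([0.8, 0.0, 0.1, 0.3])))
# A production system adds the access-control filter described above: drop
# any hit whose source document the requesting user is not entitled to read.
```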


5 ways private organizations can lead public-private cybersecurity partnerships

One tangible step that cybersecurity stakeholders can take is to build the bottom-up infrastructure that can meet JCDC’s top-down strategic vision as it attempts to descend into tactical usefulness. This can be done by encouraging the development of volunteer civil cyber defense organizations while simultaneously lobbying the federal government for support of these entities. This kind of volunteer service model is an incredibly cost-efficient way to boost national defense, save federal government resources, and assure private stakeholders about their independence. ... Unfortunately, as criticism of the JCDC emphasizes, top-down P3 efforts often fail to effectively do so due to the role of strategic parameters driving derivative mission parameters. If industry is to shape CISA’s P3 cyber initiatives more clearly toward alignment with practical tactical considerations, mapping out where innovation and adaptation come from in the interaction of key individuals spread across a complex array of interacting organizations (particularly during a crisis) becomes a critical common capacity.


Decoding tomorrow’s risks: How data analytics is transforming risk management

With digital technologies coming in, corporations can make use of data analytics to ensure goals correlate with their strategic needs. ... Talking about the different risk management strategies, data analytics can contribute towards optimisation models, which direct data-backed resource deployment towards risk mitigation; scenario analysis, which recreates likely circumstances to calculate the effectiveness of different risk mitigation applications; and personalised answers, which supply custom-fit responses to specific market conditions. ... “I believe the role of data analytics in risk mitigation has become paramount, enabling organisations to make decisions based on data-driven insights. By leveraging advanced analytics techniques, such as predictive modelling and ML, we can anticipate threats and take measures to mitigate them. From a business perspective, data analytics is considered indispensable in risk management as it helps organisations identify, assess, and prioritise risks. Companies that leverage data analytics in risk management can gain an edge by minimising disruptions, maximising opportunities, and safeguarding their reputation,” Yuvraj Shidhaye, founder and director, TreadBinary, a digital application platform, mentioned.


How AI-Driven Cyberattacks Will Reshape Cyber Protection

Aside from adaptability and real-time analysis, AI-based cyberattacks also have the potential to cause more disruption within a small window. This stems from the way an incident response team operates and contains attacks. When AI-driven attacks occur, there is the potential to circumvent or hide traffic patterns. This is somewhat similar to criminal activity, where fingerprints are destroyed. Of course, the AI methodology is to change the system log analysis process or delete actionable data. Perhaps having advanced security algorithms that identify AI-based cyberattacks is the answer. ... AI has introduced challenges where security algorithms must become predictive, rapid and accurate. This reshapes cyber protection because organizations' infrastructure devices must support the methodologies. It's no longer a concern where network intrusions, malware and software applications are risk factors, but rather how AI transforms cyber protection. The shield is not broken. It requires a transformation practice for AI-based attacks.


Four easy ways to train your workforce in cybersecurity

Do your employees install all kinds of random apps and programs? Do the same thing as the phishing emails: create your own dodgy software that locks the employee's computer, blast it out to the employee database, and see who falls for it. When they have to bring their IT assets in to be unlocked and get a scolding for installing suspicious material, however harmless, the lesson will stick. ... Cyber attacks soar during festive seasons, like the upcoming Holi holiday. Set up automated reminders to your employees to remind them not to blindly open greeting mails or click on suspicious links. You can track the open and read rate of these messages to get an idea of whether people are actually paying attention. ... If your IT team is savvy and has some time to spare, they can use generative AI to create fake personas – someone from another department, a vendor, or a customer – and see if these fake personas can fool people into giving away information they should be keeping confidential. This is particularly important, because many cyber criminals today are already using generative AI to scam unwitting victims. 


Report: AI Outpacing Ability to Protect Against Threats

There are "two sides of the coin" when it comes to AI adoption, said Greg Keller, CTO at JumpCloud. Employee productivity and technology stacks being embedded into SaaS solutions are "the new frontier," he said. "Yet there are universal security concerns. There is fear of commingling or escaping of your data into public sectors. And there is a fear of using one's data on an AI platform. CTOs are concerned about their data leaking through public LLMs," Keller said. ... "We're at the tail end of understanding digital transformation. Now, we are beginning the first phase of the identity transformation. These companies have done an amazing amount of work to lift and shift their technology stacks from legacy into the cloud with one exception - overwhelmingly, it's the Microsoft Active Directory problem," Keller said. "That's still on-premises or self-managed. So they're looking at ways to modernize this. We are in the earliest phases of security shifting away from endpoint-based [security]. Now, it's about understanding access control through the identity, and this is the new frontier."



Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." --Johann Wolfgang von Goethe

Daily Tech Digest - March 19, 2024

Is The Public Losing Trust In AI?

Of course, the simplest way to look at this challenge is that in order for people to trust AI, it has to be trustworthy. This means it has to be implemented ethically, with consideration of how it will affect our lives and society. Just as important as being trustworthy is being seen to be trustworthy. This is why the principle of transparent AI is so important. Transparent AI means building tools, processes, and algorithms that are understandable to non-experts. If we are going to trust algorithms to make decisions that could affect our lives, we must, at the very least, be able to explain why they are making these decisions. What factors are being taken into account? And what are their priorities? If AI needs the public's trust, then the public needs to be involved in this aspect of AI governance. This means actively seeking their input and feedback on how AI is used. Ideally, this needs to happen at both a democratic level, via elected representatives, and at a grassroots level. Last but definitely not least, AI also has to be secure. This is why we have recently seen a drive towards private AI – AI that isn't hosted and processed on huge public data servers like those used by ChatGPT or Google Gemini.


Reliable Distributed Storage for Bare-Metal CAPI Cluster

By default, most CAPI solutions will use the “Expand First” (or “RollingUpdateScaleOut” in CAPI terms) repave logic. This logic will install an additional fresh new server and add it to the cluster first, before then removing an old server. While this is useful to ensure the cluster never has less total compute capacity than before you started the repave operation, it is problematic for distributed storage clusters because you are introducing a new node without any data to the cluster, while taking away a node that does contain data. So instead, we want to use the “Contract First” repave logic for the pool of storage nodes. That way, we can remove a storage node first, then reinstall it and add it back to the cluster, thereby immediately restoring data redundancy. ... So, if a different issue causes the distributed storage software to not install properly on the new node, you can still run into trouble. For example, Portworx supports specific kernel versions, and installing new nodes with a kernel version it doesn’t support can prevent the installation from succeeding. For that reason, it’s a good idea to lock the kernel version that MaaS deploys. Reach out to us if you want to learn how to achieve that.
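
In Cluster API terms, "Contract First" corresponds to a rolling-update strategy that allows no surge machines. A hypothetical MachineDeployment snippet for the storage pool is sketched below; names are illustrative, and the bootstrap and infrastructure references are omitted for brevity.

```yaml
# Hypothetical MachineDeployment for the storage node pool: with maxSurge 0
# and maxUnavailable 1, Cluster API removes one old machine before creating
# its replacement ("Contract First"), so a reinstalled node rejoins and
# restores data redundancy before the next node is touched.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: storage-pool
spec:
  clusterName: bare-metal-cluster
  replicas: 3
  selector:
    matchLabels:
      pool: storage
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never add a fresh, data-less node first
      maxUnavailable: 1  # repave one storage node at a time
  template:
    metadata:
      labels:
        pool: storage
    spec: {}             # bootstrap/infrastructure refs omitted for brevity
```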


Evaluating databases for sensor data

The primary determinant in choosing a database is understanding how an application’s data will be accessed and utilized. A good place to begin is by classifying workloads as online analytical processing (OLAP) or online transaction processing (OLTP). OLTP workloads, traditionally handled by relational databases, involve processing large numbers of transactions by large numbers of concurrent users. OLAP workloads are focused on analytics and have distinct access patterns compared to OLTP workloads. In addition, whereas OLTP databases work with rows, OLAP queries often involve selective column access for calculations. ... Another consideration when selecting a database is the internal team’s existing expertise. Evaluate whether the benefits of adopting a specialized database justify investing in educating and training the team and whether potential productivity losses will appear during the learning phase. If performance optimization isn’t critical, using the database your team is most familiar with may suffice. However, for performance-critical applications, embracing a new database may be worthwhile despite initial challenges and hiccups.
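
A toy illustration of that row-versus-column point: the same records laid out both ways. The OLTP-style lookup wants a whole row by key; the OLAP-style aggregate wants one field across all records, which the columnar layout serves without touching the rest.

```python
# The same sensor data in a row layout (OLTP-friendly) and a column layout
# (OLAP-friendly). Field names and values are illustrative.
rows = [  # row store: one complete record per reading
    {"id": 1, "sensor": "t-101", "temp_c": 21.4, "ts": "2024-03-19T10:00"},
    {"id": 2, "sensor": "t-102", "temp_c": 19.8, "ts": "2024-03-19T10:00"},
    {"id": 3, "sensor": "t-101", "temp_c": 22.1, "ts": "2024-03-19T10:05"},
]

columns = {  # column store: one array per field
    "id":     [1, 2, 3],
    "sensor": ["t-101", "t-102", "t-101"],
    "temp_c": [21.4, 19.8, 22.1],
    "ts":     ["2024-03-19T10:00", "2024-03-19T10:00", "2024-03-19T10:05"],
}

# OLTP-style access: fetch one complete record by key.
record = next(r for r in rows if r["id"] == 2)

# OLAP-style access: aggregate a single field; the column layout reads only
# the values it needs instead of deserializing every full row.
avg_temp = sum(columns["temp_c"]) / len(columns["temp_c"])
print(record, avg_temp)
```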


Surviving the “quantum apocalypse” with fully homomorphic encryption

There are currently two distinct approaches to facing an impending “quantum apocalypse”. The first uses the physics of quantum mechanics itself and is called Quantum Key Distribution (QKD). However, QKD only really solves the problem of key distribution, and it requires dedicated quantum connections between the parties. As such, it is not scalable to solve the problems of internet security; instead, it is most suited to private connections between two fixed government buildings. It is impossible to build internet-scale, end-to-end encrypted systems using QKD. The second solution is to utilize classical cryptography but base it on mathematical problems for which we do not believe a quantum computer gives any advantage: this is the area of post-quantum cryptography (PQC). PQC algorithms are designed to be essentially drop-in replacements for existing algorithms, which would not require many changes in infrastructure or computing capabilities. NIST has recently announced standards for public key encryption and signatures which are post-quantum secure. These new standards are based on different mathematical problems.


Teams, Slack, and GitHub, oh my! – How collaborative tools can create a security nightmare

Fast and efficient collaboration is essential to today’s business, but the platforms we use to communicate with colleagues, vendors, clients, and customers can also introduce serious risks. Looking at some of the most common collaboration tools — Microsoft Teams, GitHub, Slack, and OAuth — it’s clear there are dangers presented by information sharing, as valuable as that is to business strategy. Any of these, if not safeguarded or used inappropriately, can be a tool for attackers to gain access to your network. The best protection is to ensure you are aware of these risks and apply the appropriate modifications and policies to your organization to help prevent attackers from gaining a foothold in your organization — that also means acknowledging and understanding the threats of insider risk and data extraction. Attackers often know your network better than you do. Chances are, they also know your data-sharing platforms and are targeting those as well. Something as simple as improper password sharing can allow a bad actor to phish their way into a company’s network and collaboration tools can present a golden opportunity.


Improving computational performance of AI requires upskilling of professionals in Embedded/VLSI area

Implementing AI systems or applications requires intensive computational processors and low-cost power to deploy algorithms. Here, Very Large Scale Integration (VLSI) and embedded system design play a critical role. VLSI design involves the creation and miniaturisation of complex circuits, such as processors, memory circuits, and more recently, customized hardware for AI applications. On the other hand, embedded systems are computing systems for dedicated or specific functionalities, such as smart agriculture or industrial automation. The integration of VLSI with AI has the potential to revolutionise various sectors by enabling faster, more power-efficient, and customised hardware for AI applications. ... AI-based solutions are applied in designing and deploying communication systems to significantly enhance network performance and thereby the overall user experience. Dynamic allocation of resources, such as power and bandwidth, can be done efficiently by AI algorithms, which leads to improved spectral efficiency, reduced interference, and power consumption. Intelligent beam forming using AI algorithms enables wireless systems to focus their power and frequency band for specific users or devices.


Microsoft announces collaboration with NVIDIA to accelerate healthcare and life sciences innovation with advanced cloud, AI and accelerated computing capabilities

Microsoft, NVIDIA and SOPHiA GENETICS are collaborating to leverage combined expertise in technology and genomics to develop a streamlined, scalable and comprehensive whole-genome analytical solution. As part of this collaboration, the SOPHiA DDM Software-as-a-Service platform, hosted on Azure, will be powered by NVIDIA Parabricks for SOPHiA DDM’s whole genome application. Parabricks is a scalable genomics analysis software suite that leverages full-stack accelerated computing to process whole genomes in minutes. Compatible with all leading sequencing instruments, Parabricks supports diverse bioinformatics workflows and integrates AI for accuracy and customization. ... Microsoft aims to propel healthcare and life sciences into an exciting new era of medicine, helping unlock transformative possibilities for patients worldwide. The combination of the global scale, security and advanced computing capabilities of Microsoft Azure with NVIDIA DGX Cloud and the NVIDIA Clara suite is set to accelerate advances in clinical research, drug discovery and care delivery.


How Deloitte navigates ethics in the AI-driven workforce: Involve everyone

The approach to developing an ethical framework for AI development and application will be unique for each organization. They will need to determine their use cases for AI as well as the specific guardrails, policies, and practices needed to make sure that they achieve their desired outcome while also safeguarding trust and privacy. Establishing these ethical guidelines -- and understanding the risks of operating without them -- can be very complex. The process requires knowledge and expertise across a wide range of disciplines. ... On a broader level, publishing clear ethics policies and guidelines, and providing workshops and trainings on AI ethics, were ranked in our survey as some of the most effective ways to communicate AI ethics to the workforce, and thereby ensure that AI projects are conducted with ethics in mind. ... Leadership plays a crucial role in underscoring the importance of AI ethics, determining the resources and experience needed to establish the ethics policies for an organization, and ensuring that these principles are rolled out. This was one reason we explored the topic of AI ethics from the C-suite perspective. 


How to stop data from driving government mad

This would be a start, but everybody in large organisations knows that top-down initiatives from the centre rarely work well at the coalface. If the JAAC is to be effective at converting data into information, what insight could it glean from structures that have evolved to do this? And what could it learn from scientific fields that manage this successfully? First, deep neural networks learn by repeatedly passing information back and forth until every neurone is tuned to achieve the same objective. Information flow in both directions is the key. Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge, notes that in government, "People at the coal face have a better understanding of the right interventions, although not what the central policy might be; a successful centre will have a co-ordinating function driven by an AI strategy, but will devolve power to the departments, professions, and regulators to implement it." Or, as Jess Montgomery, director of AI@Cam says: "Getting government data - and AI - ready will require foundational work, for example in data curation and pipeline building." 


Continuous Improvement as a Team

Conducting regular Retrospectives enables teams to pause and reflect on their past actions, practices, and workflows, pinpointing both strengths and areas for improvement. This continuous feedback loop is critical for adapting processes, enhancing team dynamics, and ensuring the team remains agile and responsive to change. Guarantee the consistency of your Retrospectives at every Sprint's conclusion. Before these sessions, collaboratively plan an agenda that promotes openness and inclusivity. Facilitators should incorporate practices such as anonymous feedback mechanisms and engaging games to ensure honest and constructive discussions, setting the stage for meaningful progress and team development. ... Effective stakeholder collaboration ensures the team’s efforts align with the broader business goals and customer needs. Engaging stakeholders throughout the development process invites diverse perspectives and feedback, which can highlight unforeseen areas for improvement and ensure that the product development is on the right track. Engage your stakeholders as a team, starting with the Sprint Reviews. 



Quote for the day:

“There's a lot of difference between listening and hearing.” -- G. K. Chesterton

Daily Tech Digest - March 18, 2024

Generative AI will turn cybercriminals into better con artists. AI will help attackers to craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect to see the quality of social engineering attacks improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions. ... AI is driving the democratisation of technology by helping less skilled users to carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential for helping malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years. Historically, such attacks required extensive technical expertise, but AI is beginning to show promise to lower these barriers. This could lead to more efforts to exploit systems at the lower level, giving attackers a foothold below the operating system and the industry’s best software security defences.


Get the Value Out of Your Data

A robust data strategy should have clearly defined outcomes and measurements in place to trace the value it delivers. However, it is important to acknowledge the need for flexibility during the strategic and operational phases. Consequently, defining deliverables becomes crucial to ensure transparency in the delivery process. To achieve this, adopting a data product approach focused on iteratively delivering value to your organization is recommended. The evolution of DevOps, supported by cloud platform technology, has significantly improved the software engineering delivery process by automating development and operational routines. Now, we are witnessing a similar agile evolution in the data management area with the emergence of DataOps. DataOps aims to enhance the speed and quality of data delivery, foster collaboration between IT and business teams, and reduce the associated time and costs. By providing a unified view of data across the organization, DataOps enables faster and more confident data-driven decision-making, ensuring data accuracy, up-to-datedness, and security. It automates and brings transparency to the measurements required for agile delivery through data product management.


Exposure to new workplace technologies linked to lower quality of life

Part of the problem is that IT workers need to stay updated with the newest tech trends and figure out how to use them at work, said Ryan Smith, founder of the tech firm QFunction, also unconnected with the study. The hard part is that new tech keeps coming in, and workers have to learn it, set it up, and help others use it quickly, he said. “With the rise of AI and machine learning and the uncertainty around it, being asked to come up to speed with it and how to best utilize it so quickly, all while having to support your other numerous IT tasks, is exhausting,” he added. “On top of this, the constant fear of layoffs in the job market forces IT workers to keep up with the latest technology trends in order to stay employable, which can negatively affect their quality of life.” ... “As IT has become the backbone of many businesses, that backbone is key to the businesses operations, and in most cases revenue,” he added. “That means it’s key to the business’s survival. IT teams now must be accessible 24 hours a day. In the face of a problem, they are expected to work 24 hours a day to resolve it. ...”


6 best operating systems for Raspberry Pi 5

Even though it has been nearly seven years since Microsoft debuted Windows on Arm, there has been a noticeable lack of ARM-powered laptops. The situation is even worse for SBCs like the Raspberry Pi, which aren’t even on Microsoft’s radar. Luckily, the talented team at WoR project managed to find a way to install Windows 11 on Raspberry Pi boards. ... Finally, we have the Raspberry Pi OS, which has been developed specifically for the RPi boards. Since its debut in 2012, the Raspberry Pi OS (formerly Raspbian) has become the operating system of choice for many RPi board users. Since it was hand-crafted for the Raspberry Pi SBCs, it’s faster than Ubuntu and light years ahead of Windows 11 in terms of performance. Moreover, most projects tend to favor Raspberry Pi OS over the alternatives. So, it’s possible to run into compatibility and stability issues if you attempt to use any other operating system when attempting to replicate the projects created by the lively Raspberry Pi community. You won’t be disappointed with the Raspberry Pi OS if you prefer a more minimalist UI. That said, despite including pretty much everything you need to use to make the most of your RPi SBC, the Raspberry Pi OS isn't as user-friendly as Ubuntu.


Speaking without vocal cords, thanks to a new AI-assisted wearable device

The breakthrough is the latest in Chen's efforts to help those with disabilities. His team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don't know how to sign. The tiny new patch-like device is made up of two components. One, a self-powered sensing component, detects and converts signals generated by muscle movements into high-fidelity, analyzable electrical signals; these electrical signals are then translated into speech signals using a machine-learning algorithm. The other, an actuation component, turns those speech signals into the desired voice expression. The two components each contain two layers: a layer of biocompatible silicone compound polydimethylsiloxane, or PDMS, with elastic properties, and a magnetic induction layer made of copper induction coils. Sandwiched between the two components is a fifth layer containing PDMS mixed with micromagnets, which generates a magnetic field. Utilizing a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, the device is capable of detecting changes in the magnetic field when it is altered as a result of mechanical forces—in this case, the movement of laryngeal muscles.


We can’t close the digital divide alone, says Cisco HR head as she discusses growth initiatives

At Cisco, we follow a strengths-based approach to learning and development, wherein our quarterly development discussions extend beyond performance evaluations to uplifting ourselves and our teams. We understand that a one-size-fits-all approach is inadequate. To best play to our employees' strengths, we have to be flexible, adaptable, and open to what works best for each individual and team. This enables us to understand individual employees' unique learning needs, enabling us to tailor personalised programs that encompass diverse learning options such as online courses, workshops, mentoring, and gamified experiences, catering to diverse learning styles. As a result, our employees are energized to pursue their passions, contributing their best selves to the workplace. Measuring the quality of work, internal movements, employee retention, patents, and innovation, along with engagement pulse assessments, allows us to gauge the effectiveness of our programs. When it comes to addressing the challenge of retaining talent, it's essential for HR leaders to consider a holistic approach. 


Vector databases: Shiny object syndrome and the case of a missing unicorn

What’s up with vector databases, anyway? They’re all about information retrieval, but let’s be real, that’s nothing new, even though it may feel like it with all the hype around it. We’ve got SQL databases, NoSQL databases, full-text search apps and vector libraries already tackling that job. Sure, vector databases offer semantic retrieval, which is great, but SQL databases like Singlestore and Postgres (with the pgvector extension) can handle semantic retrieval too, all while providing standard DB features like ACID. Full-text search applications like Apache Solr, Elasticsearch and OpenSearch also rock the vector search scene, along with search products like Coveo, and bring some serious text-processing capabilities for hybrid searching. But here’s the thing about vector databases: They’re kind of stuck in the middle. ... It wasn’t that early either — Weaviate, Vespa and Milvus were already around with their vector DB offerings, and Elasticsearch, OpenSearch and Solr were ready around the same time. When technology isn’t your differentiator, opt for hype. Pinecone’s $100 million Series B funding was led by Andreessen Horowitz, which in many ways is living by the playbook it created for the boom times in tech.
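
To ground the pgvector point, here is a minimal sketch with psycopg2 (placeholder connection details and toy 3-dimensional vectors; real embeddings would come from a model) of semantic retrieval living inside an ordinary ACID Postgres database:

```python
# Semantic retrieval in plain Postgres via the pgvector extension,
# driven from Python with psycopg2. DSN and vectors are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app user=app host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id bigserial PRIMARY KEY,
            body text,
            embedding vector(3)
        )
    """)
    cur.execute(
        "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
        ("hello world", "[0.1, 0.9, 0.0]"),
    )
    # <-> is pgvector's L2-distance operator; nearest neighbors come first.
    cur.execute(
        "SELECT body FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.2, 0.8, 0.1]",),
    )
    print(cur.fetchall())
```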


The Role of Quantum Computing in Data Science

Despite its potential, the transition to quantum computing presents several significant challenges. Quantum computers are highly sensitive to their environment, with qubit states easily disturbed by external influences, a problem known as quantum decoherence. This sensitivity requires that quantum computers be kept in highly controlled conditions, which can be expensive and technologically demanding. Moreover, concerns about the future cost implications of quantum computing on software and services are emerging. If early prices turn out to be sky-high, customers might be forced to search for AWS alternatives, especially if providers raise prices to fund quantum features, much as Microsoft is banking everything on AI. This raises the question of how quantum computing will alter the prices and features of both consumer and enterprise software and services, further highlighting the need for a careful balance between innovation and accessibility. There’s also a steep learning curve for data scientists adapting to quantum computing.
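To put a number on decoherence: a qubit's excited-state population decays roughly as exp(-t/T1), which is why hardware must isolate qubits so aggressively. A tiny illustrative sketch, with a made-up but ballpark relaxation time:

```python
import numpy as np

T1 = 100e-6  # assumed relaxation time of 100 microseconds (illustrative)

def excited_population(t):
    """Probability the qubit is still in |1> after time t (amplitude damping)."""
    return np.exp(-t / T1)

for t in (1e-6, 10e-6, 100e-6, 1e-3):
    print(f"after {t * 1e6:8.1f} us: P(|1>) = {excited_population(t):.4f}")
```

The useful computation window is only a few multiples of T1, which is the practical meaning of "highly controlled conditions."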


AI-Driven API and Microservice Architecture Design for Cloud

Implementing AI-based continuous optimization for APIs and microservices in Azure involves using artificial intelligence to dynamically improve performance, efficiency, and user experience over time. Here's how you can achieve continuous optimization with AI in Azure:

Performance monitoring: Implement AI-powered monitoring tools to continuously track key performance metrics such as response times, error rates, and resource utilization for APIs and microservices in real time.

Automated tuning: Utilize machine learning algorithms to analyze performance data and automatically adjust configuration settings, such as resource allocation, caching strategies, or database queries, to optimize performance.

Dynamic scaling: Leverage AI-driven scaling mechanisms to adjust the number of instances hosting APIs and microservices based on real-time demand and predicted workload trends, ensuring efficient resource allocation and responsiveness (a minimal sketch of this idea follows below).

Cost optimization: Use AI algorithms to analyze cost patterns and resource utilization data to identify opportunities for cost savings, such as optimizing resource allocation, implementing serverless architectures, or leveraging reserved instances.
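None of the items above names a specific Azure API, so here is a generic, illustrative sketch of the dynamic-scaling idea: forecast the next interval's load from recent history and size the instance pool to it with some headroom. The capacity figure, bounds, and forecast method are assumptions.

```python
from collections import deque

REQS_PER_INSTANCE = 500   # assumed capacity of one instance (requests/min)
MIN_INSTANCES, MAX_INSTANCES = 2, 20
HEADROOM = 1.25           # over-provision 25% above the forecast

history = deque(maxlen=10)  # recent per-minute request counts

def forecast(history):
    """Trivial predictor: moving average plus the most recent trend."""
    if len(history) < 2:
        return history[-1] if history else 0
    avg = sum(history) / len(history)
    trend = history[-1] - history[-2]
    return max(0, avg + trend)

def target_instances(history):
    expected = forecast(history) * HEADROOM
    needed = -(-int(expected) // REQS_PER_INSTANCE)  # ceiling division
    return min(MAX_INSTANCES, max(MIN_INSTANCES, needed))

for load in (900, 1100, 1500, 2400, 2300):
    history.append(load)
    print(f"load={load:5d}/min -> scale to {target_instances(history)} instances")
```

A production system would plug a real forecaster, or the platform's own autoscale signals, into the same decision loop.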


4 ways AI is contributing to bias in the workplace

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. That online data inevitably reflects societal inequities and historical biases, which the model inherits and replicates to some degree. No one using AI should assume these tools are inherently objective just because they're trained on large amounts of data from many sources. While generative AI bots can be useful, we should not underestimate the risk of bias in an automated hiring process -- a reality that is crucial for recruiters, HR professionals, and managers to grasp. Research has also found racial bias in facial-recognition technologies, which show lower accuracy rates for dark-skinned individuals. Something as simple as using demographic distributions by ZIP code to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.
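A toy illustration of the ZIP-code point: even when the protected attribute is withheld from a model, a correlated proxy feature can reproduce the historical disparity. Everything below is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic world: group membership is never given to the model,
# but ZIP code correlates strongly with group (90% of the time).
group = rng.integers(0, 2, n)                     # hidden protected attribute
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical labels encode past inequity: group 1 was approved less often.
approved = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

# A "model" that simply learns the historical approval rate per ZIP code.
rate_by_zip = {z: approved[zip_code == z].mean() for z in (0, 1)}
prediction = np.array([rate_by_zip[z] > 0.5 for z in zip_code])

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {prediction[group == g].mean():.2f}")
```

The model never sees the group label, yet its approval rates diverge sharply by group, which is exactly the proxy effect the article warns about.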



Quote for the day:

"The most common way people give up their power is by thinking they don't have any." -- Alice Walker

Daily Tech Digest - March 17, 2024

Generative AI will drive a foundational shift for companies — IDC

“Over the last year, most organizations debated creating Chief AI Officers and centers of excellence to decide how to embed AI and create new business centers for new AI-enabled products and services,” said Rick Villars, group vice president of IDC’s Worldwide Research division. CIOs are also rethinking their capital investment plans and staffing needs based on AI initiatives, according to Villars, including how AI will affect an organization’s long-term revenue and profitability. Most organizations are likely to choose a hybrid approach to building out their AI plans -- that is, companies will partner with service providers while also customizing existing AI platforms such as ChatGPT, as well as building their own proprietary, but smaller, AI models for specific use cases. “All applications you buy will become more intelligent. ... Phil Carter, group vice president of IDC’s Worldwide Thought Leadership Research, said organizations shouldn’t expect an immediate ROI from their investments. Like other major economic shifts, such as the arrival of the tractor in farming, genAI technology can take decades to achieve widespread adoption and ROI.


Blockchain in trademark and brand protection, explained

Through the use of blockchain technology, firms are able to generate immutable records of product legitimacy. It is possible to provide each product with a unique identification number that allows retailers and customers to instantly confirm its legitimacy. In addition to shielding customers against fake items, this also helps firms preserve their goodwill, ensure data integrity, and win over new customers. Additionally, supply chains benefit from the transparency and traceability that blockchain offers, allowing firms to monitor the flow of goods from manufacturing to distribution. Businesses can use blockchain technology to confirm the legitimacy of products and spot any illegal or fake goods circulating in the market. ... It might be difficult and expensive, however, to integrate blockchain technology with current systems and procedures. To apply blockchain efficiently, firms might need to redesign their infrastructure and make considerable investments in new technology and expertise. This can be a major hurdle, particularly for smaller companies with tighter budgets. The implementation of blockchain in brand protection is further complicated by problems with scalability and interoperability.
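The mechanism described here (a unique identifier whose registration anyone can verify) can be sketched as an append-only, hash-chained record. This toy Python sketch is illustrative only and is not a real blockchain client; the product IDs are invented.

```python
import hashlib
import json

class ProductRegistry:
    """Toy append-only registry mimicking an immutable product ledger."""

    def __init__(self):
        self.blocks = []  # each block commits to the previous block's hash

    def register(self, product_id, manufacturer):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"product_id": product_id, "manufacturer": manufacturer,
                  "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record["hash"]

    def verify(self, product_id):
        """A retailer or customer checks whether an ID was ever registered."""
        return any(b["product_id"] == product_id for b in self.blocks)

ledger = ProductRegistry()
tag = ledger.register("SKU-0001-XYZ", "Acme Corp")
print("registered with hash:", tag[:16], "...")
print("genuine?", ledger.verify("SKU-0001-XYZ"))   # True
print("genuine?", ledger.verify("SKU-FAKE-123"))   # False
```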


Open source is not insecure

It’s too easy whenever there is a major vulnerability to malign the overall state of open source security. In fact, many of these highest-profile vulnerabilities show the power of open source security. Log4shell, for example, was the worst-case scenario for an OSS vulnerability in both scale and visibility: it lived in one of the most widely used libraries in one of the most widely used programming languages. (Log4j was even running on the Mars rover. Technically this was the first interplanetary OSS vulnerability!) The Log4shell vulnerability was trivial to exploit, incredibly widespread, and seriously consequential. The maintainers were able to patch it and roll out a fix in a matter of days. It was a major win for open source security response at the maintainer level, not a failure. ... But today, most software consumption is occurring outside of distributions. The programming language package managers themselves -- npm (JavaScript), pip (Python), RubyGems (Ruby), Composer (PHP) -- look and feel like Linux distribution package managers, but they work a little differently. They basically offer zero curation -- anyone can upload a package and mimic a language maintainer.
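One concrete consequence of zero curation is typosquatting: publishing a package whose name is a keystroke away from a popular one. A minimal illustrative check is sketched below; the popular-package list is a stand-in, and real registries use far more signals than name similarity.

```python
from difflib import SequenceMatcher

# Stand-in list; a real check would use actual registry download stats.
POPULAR = {"requests", "numpy", "pandas", "django", "flask"}

def suspicious(name, threshold=0.8):
    """Flag names that nearly match, but are not, a popular package."""
    if name in POPULAR:
        return None
    for pkg in POPULAR:
        if SequenceMatcher(None, name, pkg).ratio() >= threshold:
            return pkg
    return None

for candidate in ("requets", "numpy", "pandsa", "totally-novel-pkg"):
    hit = suspicious(candidate)
    print(f"{candidate!r}: {'looks like ' + hit if hit else 'ok'}")
```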


AI is keeping GitHub chief legal officer Shelley McKinley busy

“I would say that AI is taking up [a lot of] my time — that includes things like ‘how do we develop and ship AI products,’ and ‘how do we engage in the AI discussions that are going on from a policy perspective?,’ as well as ‘how do we think about AI as it comes onto our platform?’,” McKinley said. The advance of AI has also been heavily dependent on open source, with collaboration and shared data pivotal to some of the most preeminent AI systems today — this is perhaps best exemplified by the generative AI poster child OpenAI, which began with a strong open-source foundation before abandoning those roots for a more proprietary play ... “Regulators, policymakers, lawyers… are not technologists,” McKinley said. “And one of the most important things that I’ve personally been involved with over the past year, is going out and helping to educate people on how the products work. People just need a better understanding of what’s going on, so that they can think about these issues and come to the right conclusions in terms of how to implement regulation.” At the heart of the concerns was that the regulations would create legal liability for open source “general purpose AI systems,” which are built on models capable of handling a multitude of different tasks.


Is OpenAI Opening Up To Quantum?

It’s likely that the potential for quantum to solve certain computational tasks critical to OpenAI’s growth is one reason for the quantum feelers, as it were. First, as AI models become more sophisticated, the computational resources required to train them have skyrocketed. Quantum computing offers a potential solution to this bottleneck, promising speed-ups for specific types of computations, including those involved in machine learning and optimization problems. By exploiting superposition and entanglement, quantum computers could one day process vast amounts of data in ways that classical computers will struggle to match, and eventually do so with far fewer economic and environmental resources. OpenAI CEO Sam Altman recently made headlines for reports that he was seeking $7 trillion to make chips, apparently to feed this massive need for speed and processing power. He has since said the reports on that figure were inaccurate, but the move still underscores OpenAI’s computational dilemma: grow, but reduce costs and improve performance. In a sentence, then, the potential integration of quantum computing with AI could boost model efficiency.


Flexera 2024 State of the Cloud Reveals Spending as the Top Challenge of Cloud Computing

“This is a complex year for cloud adoption. Organizations are navigating economic uncertainties by investing in generative AI, security, and sustainability while prioritizing cost management,” said Brian Adler, Senior Director, Cloud Market Strategy at Flexera. He added, “Cloud adoption continues to grow. The shift toward hybrid and multi-cloud environments underscores the importance of comprehensive cost management, with nearly half of all workloads and data now in the public cloud. FinOps practices and cloud centers of excellence are growing as companies move toward centralized, strategic cloud management.” The report also shows multi-cloud usage rising to 89%, up from 87% last year. Sixty-one percent of large enterprises use multi-cloud security tools, and 57% use multi-cloud FinOps tools for cost optimization. Organizations are taking a centralized approach to cloud, with 63% already having a cloud center of excellence (CCOE) and 14% planning to create one within the next year. Sustainability also remains high on organizations’ priority lists.


Cloud CISO Perspectives: Easing the psychological burden of leadership

CISOs are the public face of an organization’s security team, and they sit at the nexus of the security experts, engineers, and developers who report to them, the organization’s security policies, and the executives and board of directors to whom they report. They often are blamed for security breaches that occur on their watch, and yet CISOs are not fleeing their jobs -- recent data suggests that, despite the stress of the role, they stay at their employer for more than four and a half years at a time. While a CISO who has stayed with one company for five years has clearly demonstrated their dedication to defending their organization’s data and supporting its security teams, it doesn’t mean that they’re happy. High-profile data breaches are on the rise, and government agencies are imposing stricter regulatory requirements, including increasing levels of legal accountability (and even personal liability) for their organization’s cybersecurity posture. The stresses CISOs contend with can take a psychological toll, lead to poor decision-making, and even end in burnout.


Tech Transformation in Food Technology with AI

AI-driven predictive analytics offer crop management assistance. AI draws on historical data, weather patterns, and soil conditions to produce crop yield forecasts, identify optimal planting times, and anticipate potential disease outbreaks. This proactive approach allows farmers to implement preventive measures, adjust farming practices, and mitigate risks, ultimately improving crop quality and quantity. ... Automation is crucial in streamlining food processing operations. AI-powered robotics and machine learning systems automate repetitive tasks such as sorting, grading, and packaging, enhancing efficiency, consistency, and speed. This reduces labour costs and minimises human errors, ensuring uniform product quality and meeting stringent industry standards and consumer expectations. ... AI technologies optimise every aspect of the food supply chain, from farm to fork. AI algorithms optimise logistics by analysing data on transportation, inventory management, and consumer preferences. They minimise transportation costs and reduce food wastage. Real-time monitoring and predictive analytics enable proactive decision-making, ensuring timely delivery and optimal utilisation of resources.
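Stripped of the buzzwords, the crop-forecasting claim is regression at heart: fit historical yield against weather and soil features, then predict the coming season. A minimal illustrative sketch on synthetic data follows; the features and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic seasons: rainfall (mm), mean temperature (C), soil nitrogen (ppm).
X = np.column_stack([
    rng.normal(600, 100, n),
    rng.normal(22, 3, n),
    rng.normal(40, 8, n),
])
# Invented ground truth: yield (t/ha) responds to all three plus noise.
y = 0.004 * X[:, 0] + 0.10 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

next_season = np.array([1.0, 550.0, 24.0, 35.0])  # intercept + features
print(f"forecast yield: {next_season @ coef:.2f} t/ha")
```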


Modernizing Data Management with Karen Lopez

“One thing I’ve found working in the data industry is that there’s always something new coming over the horizon,” Lopez began. “Even so, we can still find ourselves suffering from the same struggles I was working on 35 years ago.” However, she pointed out, although relational databases were the core of everything until about 10 years ago, at that time there was an explosion of other types of databases and data stores -- a fact that makes the addition of the word “modern” much more meaningful than it otherwise might have been. “There are just so many more opportunities for new approaches to data management now,” she added. “I’m usually more of a skeptic when I see ‘modern’ in front of anything,” Lopez said. “There are certain standards, principles, and practices that work even in this new environment. It usually takes someone with a lot of hard-won experience to be able to tell whether one of these new systems or tools is trustworthy. Some of these things may be really exciting, but they just don’t catch on. For example, maybe they’re not scalable or they don’t meet the cost-benefit test -- there are plenty of reasons.”


Navigating Application Security in the AI Era

AI-generated code and organization-specific AI models have quickly become important parts of corporate IP. This raises the question: Can compliance protocols keep up? AI-generated code is typically created by piecing together multiple fragments of code found in publicly available code stores. However, issues arise when AI-generated code pulls these pieces from open source libraries with license types that are incompatible with an organization’s intended use. Without regulation or oversight, this type of “non-compliant” code based on un-vetted data can jeopardize intellectual property and sensitive information. Malicious reconnaissance tools could automatically extract the corporate information shared with any given AI model, or developers may share code with AI assistants without realizing they’ve unintentionally revealed sensitive information. ... AI can be used to deliberately create malicious, difficult-to-detect code and insert it into open-source projects. AI-driven attacks are often vastly different from what human hackers would create, and different from what most security protocols are designed to protect against, allowing them to evade detection.
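The license-compatibility risk can at least be screened for mechanically by comparing each dependency's declared license against an allowlist for the intended use. A minimal illustrative sketch follows; the dependency list and policy are invented, and real scanners also handle transitive dependencies and dual licensing.

```python
# Hypothetical policy: licenses acceptable for a proprietary product.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}

# Invented dependency metadata; a real tool would read it from the
# package manager (e.g., pip's METADATA or npm's package.json).
dependencies = {
    "fast-json": "MIT",
    "crypto-utils": "Apache-2.0",
    "viral-widgets": "GPL-3.0-only",   # copyleft: incompatible with this policy
    "mystery-lib": None,               # no declared license at all
}

def audit(deps):
    for name, license_id in deps.items():
        if license_id is None:
            print(f"FLAG {name}: no declared license, needs manual review")
        elif license_id not in ALLOWED:
            print(f"FLAG {name}: {license_id} not on the allowlist")
        else:
            print(f"ok   {name}: {license_id}")

audit(dependencies)
```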



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr