Daily Tech Digest - August 06, 2023

California Opens Privacy Probe Into Car Data Collection

Modern vehicles are equipped with a wide range of sensors, cameras, and other technologies that generate vast amounts of data. This data includes information about the vehicle’s location, speed, acceleration, braking, and even driver behavior. Additionally, connected car systems can collect data on music preferences, navigation history, and other personal preferences. Car data is collected by various parties, including automakers, technology companies, and third-party service providers. This data is used for a variety of purposes, such as improving vehicle performance, developing new features, and providing personalized services to consumers. However, concerns have been raised about the potential misuse of, or unauthorized access to, this sensitive information. The investigation by the California Privacy Protection Agency highlights the importance of protecting consumer privacy in the context of car data collection. As vehicles become more connected and autonomous, the amount of data being generated increases exponentially.


An eventful week in the world of Arm and RISC-V

What’s most intriguing amid all of these coincidental events, though, is NXP Semiconductors’ announcement. Almost all of the initial investor companies in the new, unnamed organization are also Arm licensees. The press release states: “Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc., have come together to jointly invest in a company aimed at advancing the adoption of RISC-V globally by enabling next-generation hardware development.” So, was this strategically timed to coincide with Arm’s annual meet? What’s also intriguing is that the announcement says a new company has been formed, but the company isn’t named. Maybe the disclaimer is the added statement that “the company formation will be subject to regulatory approvals in various jurisdictions.” The new unnamed company, formed in Germany, also “calls on industry associations, leaders, and governments, to join forces in support of this initiative which will help increase the resilience of the broader semiconductor ecosystem.”


How Agile Management Disrupts the Status Quo

Agile is a relatively new project management methodology, and you might wonder how it differs from the typical or traditional project or team management approach an organization might use—and how it disrupts those traditional approaches. Agile principles are designed to allow for more seamless collaboration, feedback, and flexibility to ensure faster and more thorough success in bringing high-quality products to market. Agile methodology and coaching should focus on bringing together stakeholders, developers, programmers, and end-users to support the underlying principles. This management methodology encourages and facilitates ongoing conversations and regular communication as a primary means of measuring progress with incremental development. However, “incremental” movement doesn’t necessarily translate to slowing down the process. In fact, team member input—and, importantly, user input—ultimately allows for a more effective, functional, and satisfying final product.


A Journey Through Software Development Paradigms

In the quest for seamless collaboration and integration between development and operations, we encounter DevOps, a paradigm that bridges the gap between siloed teams and fosters a culture of continuous integration, delivery, and learning. We explore the triumphs and challenges faced by organizations adopting DevOps, witnessing its potential to accelerate software delivery, improve quality, and enhance customer experiences. Beyond the familiar shores of Agile and DevOps, our journey ventures into the uncharted territories of emerging paradigms, each holding the promise of further transformation. Lean Software Development, Continuous Delivery, and Site Reliability Engineering (SRE) await our exploration, revealing new insights and practices that continue to shape the future of software development. As we reach the culmination of our voyage, we stand in awe of the pioneers and visionaries who have paved the way for progress, embracing adaptation and innovation in the pursuit of excellence. 


The Rise of Emotionally Aware Technology: A Deep Dive into Global Affective Computing

One of the key drivers behind the rise of affective computing is the increasing demand for personalized user experiences. Today’s consumers expect their devices to understand their needs and preferences and to respond accordingly. Emotionally aware technology can meet these expectations by adapting its responses based on the user’s emotional state. For example, a virtual assistant that can detect frustration in a user’s voice could offer to simplify its instructions or provide additional support. Another factor contributing to the growth of affective computing is the advancement in machine learning and AI technologies. These technologies enable computers to learn from data and improve their performance over time, making it possible for them to recognize and interpret complex human emotions. For instance, facial recognition software can now analyze subtle facial expressions to determine a person’s mood, while natural language processing can interpret the emotional tone in written text.
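The text side of this can be sketched as a toy lexicon-based detector. Note the cue words, threshold, and function names below are invented for illustration only; production affective computing relies on trained models, not keyword lists:

```python
# Hypothetical cue lexicon -- an illustrative stand-in for a trained model.
FRUSTRATION_CUES = {"useless", "broken", "again", "still", "ridiculous", "annoying"}

def frustration_score(utterance: str) -> float:
    """Fraction of words in the utterance that match a frustration cue."""
    words = utterance.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?'\"") in FRUSTRATION_CUES)
    return hits / max(len(words), 1)

def is_frustrated(utterance: str, threshold: float = 0.15) -> bool:
    """Flag an utterance so an assistant could switch to simpler instructions."""
    return frustration_score(utterance) >= threshold
```

A virtual assistant along the lines described above could run each transcribed utterance through such a check and, when it fires, offer to simplify its instructions.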


Digital twins: The key to smart product development

In advanced industries, survey data indicate that almost 75 percent of companies have already adopted digital-twin technologies that have achieved at least medium levels of complexity. There is significant variance between sectors, however. Players in the automotive—and aerospace and defense—industries appear to be more advanced in their use of digital twins today, while logistics, infrastructure, and energy players are more likely to be developing their first digital-twin concepts. One major aerospace company is developing a machine-learning-based geometry optimization system that can simulate thousands of different configurations at high speed to identify weight savings, aerodynamic improvements, and other performance benefits. A European software company is building a multiphysics model of the human heart to support drug and medical-device development. In the United States, an automotive company is building a system that can model all the software and hardware configurations it offers. The system will be used to simulate the effect of design improvements before they are delivered to customers as over-the-air updates. 


Four technology disruptions organizations must watch

Digital humans are becoming more and more like real people. They are readily available and can interact over a screen to handle a service-based issue or provide customer service instantly. As digital human software is integrated with natural language processing and robotic process automation tools, digital humans will become a growing presence in the workflows of more and more processes. Consulting leaders should work, both singly and in tandem with leaders of other parts of an organization, on crafting approaches their clients can use to leverage a digital human workforce. Service delivery leaders — particularly within business process outsourcing providers — should be developing a strategy to deploy digital humans within their service delivery functions. ... A decentralized autonomous organization (DAO) is a digital entity, running on a blockchain (which provides a secure digital ledger for communication tracking), that can engage in business interactions with other DAOs, digital and human agents, as well as corporations, without conventional human management.


Bitcoin Beyond the Currency – the Disruption of Industries

The Bitcoin economy has the potential to become the biggest economy in the world; bigger than the United States or China. Bitcoin is a solution for everyone in the world who lives in fear of inflation risk, currency risk, or regime risk. A global, decentralized, trustless settlement layer and means of exchange with no state backing or intervention. For that to happen, BTC has to be more than a store of value; it has to be a currency. We have to stop thinking about it in terms of market capitalization and start thinking about it in terms of a gross decentralized product, the “GDP” of the Bitcoin economy. One doesn’t talk about the market capitalization of the dollar; we shouldn’t think of Bitcoin in those terms either. Bitcoin is continuing to become increasingly vital as legacy institutions fall behind the strides being made in the technology sector. These breakthroughs are significantly disrupting incumbent industries, ranging from those commonly considered, such as banking and finance, to more unique industries such as insurance and energy.


Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

Keep in mind: despite their capabilities, large language models can’t tell what’s real from what’s not. And when asked to verify if something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public. In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. This is the first case of its nature against OpenAI, and it could test the legal viability of any future AI-related defamation lawsuits. However, some legal experts believe the case may be challenging to maintain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them. Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm.


Data Democratization’s Impact on Users and Governance

A key result of greater user involvement in the nuts and bolts of data is the increased importance of data literacy throughout the organization, Stodder added. “It’s essential for organizations to understand what their current capabilities are and to make a plan to address any stumbling block they’re having.” Training tailored to the full range of user personas, from advanced users to more basic data consumers, will be critical to any data democratization effort. ... Another critical aspect of a democratization effort is an effective governance program. “Organizations can easily expand their data programs faster than they expand their governance programs,” Stodder explained, “which, given the existing strain placed on governance by regulations and the complexity of the data landscape, can only compound the problems.” Some of these governance issues can also be exacerbated by the distributed nature of a democratized landscape. “Many organizations are trying to consolidate to a kind of hub-and-spoke model,” Stodder said, “which has been effective for many of them.”



Quote for the day:

“When something is important enough, you do it even if the odds are not in your favor.” --
Elon Musk

Daily Tech Digest - August 05, 2023

ESG & Climate Risk Management: Integrating Environmental Data In Ratings

Integrating environmental data into investment strategies allows investors to identify companies proactively addressing environmental challenges. It provides a comprehensive picture of a company's sustainability practices, particularly relevant in industries susceptible to climate risks, such as energy, agriculture, and transportation. By considering environmental data, investors can assess a company's resilience to climate change, regulatory changes, and physical risks such as extreme weather events. They can also evaluate how well a company aligns with global sustainability goals, such as the United Nations Sustainable Development Goals (SDGs). Moreover, environmental data integration helps investors identify companies capitalizing on emerging opportunities in the transition to a low-carbon economy. Companies with innovative solutions, efficient resource management, and clean technologies will likely thrive in a future characterized by climate-conscious policies and consumer preferences.


How New Tech Elevates Release Management’s Quality Standards

ML and AI technology can help with intelligently choosing when to deploy and when to roll back a release. To recommend deployment tactics, machine learning (ML) models can learn from previous deployment experiences, including success rates, user feedback, and error patterns. To ensure a more dependable and error-resistant deployment process, AI algorithms can track the deployment process, evaluate the system’s health, and initiate automated rollbacks if anomalies or severe concerns are discovered. In general, ML and AI provide valuable capabilities to release management: boosting testing and quality assurance, streamlining release planning, enabling continuous monitoring, analyzing release risks, and simplifying wise deployment decisions. Utilizing these technologies enables businesses to improve software quality, streamline release management procedures, and deliver high-performing applications more effectively and reliably.
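A minimal sketch of the anomaly-triggered rollback idea, using a z-score check of the current error rate against a baseline window. The function name, metric, and threshold here are illustrative assumptions; a production system would feed many health signals into a trained model:

```python
import statistics

def should_roll_back(baseline_error_rates: list,
                     current_error_rate: float,
                     threshold: float = 3.0) -> bool:
    """Return True if the current error rate is anomalous versus baseline.

    A deployment pipeline could call this after each release and trigger
    an automated rollback when it returns True.
    """
    mean = statistics.mean(baseline_error_rates)
    stdev = statistics.stdev(baseline_error_rates)
    if stdev == 0:
        return current_error_rate > mean
    z_score = (current_error_rate - mean) / stdev
    return z_score > threshold
```

For example, a post-release error rate of 20% against a baseline hovering near 1% would trip the check, while a rate inside the baseline's normal range would not.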


Unleash the Power of Open Source: The SONiC Revolution

As businesses embrace connectivity from core to cloud to edge, network management becomes more complex. Open source networking, specifically SONiC, simplifies network management by providing a unified fabric across the entire network ecosystem. SONiC enables organizations to manage and control their networks seamlessly, regardless of the deployment location. This simplification allows for a more consistent and streamlined network management experience, reducing operational complexities and enhancing efficiency. In addition, SONiC is built on standards-based open APIs, providing organizations numerous management platform options to choose from, which is particularly useful for those already invested in other DevOps, Linux-based, or open source solutions. ... Emerging technologies like AI/ML, 5G, and the data boom at the edge are transforming business operations and service delivery. These technologies require a modern infrastructure built on containerized architecture, Ansible automation, and predictive AI/ML monitoring solutions. SONiC easily enables the adoption of these technologies, providing the foundation for next-generation networking.


The Art of Reducing 'Technology Debt' & Winning Digital Transformation Race

Almost every organization is undergoing digital transformation, hence the noticeable surge in investments toward cloud automation, cloud computing, and cybersecurity services. In this scenario, however, the pace at which organizations attempt transformation matters, and so do the choices they make during the journey. Not all transformations are mandatory. In the IT industry, there is a term called 'technology debt,' which basically means that you are at a specific juncture because of decisions you took years ago. Technical debt is similar to dark matter: you can deduce its impact but cannot see or measure it. So any company deliberating on large-scale digital transformation must ensure it partners with the right tech partners/vendors to get the right perspective on talent and capability. To strike a balance between reaping the benefits of transformation and ensuring security, companies must adopt a multi-faceted approach led effectively by experts. Companies should thoroughly assess potential partners' cybersecurity measures, track records, and commitment to data protection.


Spare Some Change: Emotional Debt, Technical Debt, & Preparing for Change

The tricky thing about emotional debt, in particular, is that the collectors are unpredictable — you can defer payments for months or even years without a single letter or call, then you run into a health scare, an unexpected bill, or just an especially bad day, and all of the feelings that you’ve been deferring come crashing down on you at once. And it’s important to recognize that this emotional debt translates into workplace well-being; a recent Deloitte study indicates that many employees are struggling with unacceptably low levels of well-being. ... I had one of these moments recently. I ran headlong into a mental and emotional wall that I didn’t see coming at all, and at first, it made every part of life overwhelming. These are the moments when understanding and applying a strategic planning framework, like the one I provided in my book, Building the Business of You, provides much-needed structure. I’ve never tackled some of the challenges I’m dealing with in my personal life before, but I have been a manager, which means I’ve helped create strategies to manage change and facilitate progress, so I know I can do that.


How to account for hidden costs of digital product development and offset them

Once your digital product is developed and deployed, the journey is far from over. Ongoing software maintenance and troubleshooting are crucial to ensuring your product's stability, security, and optimal performance. However, these aspects can bring forth hidden costs that may catch you off guard if not managed strategically. Software maintenance involves routine updates, bug fixes, security patches, and performance enhancements. Failing to allocate adequate resources to these tasks can lead to a deterioration of your product's functionality and user experience. Additionally, the need for troubleshooting arises when unforeseen issues or bugs emerge post-launch, demanding swift resolutions to avoid disruptions and customer dissatisfaction. ... Outsourcing grants you the flexibility to scale resources based on the current maintenance requirements, saving you from the burden of hiring and training additional staff. You can easily add or reduce resources as needed, ensuring cost-effectiveness and optimized productivity.


India’s Data Privacy Bill is Back

The Bill itself proposes data protection legislation that allows transfer and storage of personal data in some countries while raising the penalty for violations. It requires consent before collecting personal data and imposes stiff penalties, to the tune of Rs. 500 crore, on those that fail to prevent data breaches. The Bill applies to the processing of digital personal data within Indian territory, and to processing outside of India if it is done in connection with any profiling of, or offering goods or services to, data principals within India. However, it does not apply to non-automated processing, processing for domestic or personal purposes by individuals, or data contained in records that have been in existence for at least 100 years. ... On the issue of consent, the Bill notes that personal data of an individual can only be processed for a lawful purpose for which the concerned individual has given consent or is deemed to have given her consent. It mentions that consent should be free, specific, informed, and unambiguous.


AI Ethics Teams Lack ‘Support Resources & Authority’

Diplomatically approaching one product team after another in hopes of collaborating only gets ethics workers so far. They need some formal authority to require that problems be addressed, Ali says. “An ethics worker who approaches product teams on an equal footing can simply be ignored,” she says. And if ethics teams are going to implement that authority in the horizontal, nonhierarchical world of the tech industry, there need to be formal bureaucratic structures requiring ethics reviews at the very beginning of the product development process, Ali says. “Bureaucracy can set rules and requirements so that ethics workers don’t have to convince people of the value of their work.” Product teams also need to be incentivized to work with ethics teams, Ali says. “Right now, they are very much incentivized by moving fast, which can be directly counter to slowly, carefully, and responsibly examining the effects of your technology,” Ali says. Some interviewees suggested rewarding teams by giving them “ethics champion” bonuses when a product is made less biased or when the plug is pulled on a product that has a serious problem.


A 4-pronged strategy to cut SaaS sprawl

IT leaders must conduct regular software audits to identify and assess the usage of all SaaS applications across the organization. Kamal Goel, senior VP of IT at Hitachi Systems India, says, “Gather data on software adoption, utilization, and user feedback to determine which tools are genuinely adding value to the organization and which ones are underutilized or redundant. This analysis will help you make informed decisions about which subscriptions to keep, downgrade, or terminate.” At a previous employer, Goel’s IT department conducted a comprehensive audit of all SaaS applications and found that multiple teams were subscribed to a project management tool but most only used a fraction of its features. “By switching to a more focused and cost-effective project management tool and discontinuing the old one, the organization saved significant expenses without compromising productivity,” he says. Oftentimes, teams continue to rely on existing systems and processes while the quality of data and intelligence in the new system suffers, resulting in its more advanced features becoming superfluous.


API Standardization and Its Role in Next-gen Networking

APIs are becoming more important because of the way applications are built and services are delivered. Applications are composable, relying on integrated functional elements from different sources. A simple application for most businesses might need a mobile frontend, a link to a backend database, and a processing engine in between. And many might use data from third parties. Standardized APIs would ensure the elements worked together and developers did not have to start from scratch every time they built a new application. ... “Increasingly, next-generation services like Network-as-a-Service (NaaS) solutions will be delivered across a system of many providers, and the networks supporting these services will be fully API-driven,” says Pascal Menezes, CTO, MEF. “For this to happen, standards-based automation is required throughout the entire system where all parties adopt a common, standardized set of APIs at both the business process and operational levels.”



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - August 04, 2023

Cloud may be overpriced compared to on-premises systems

Public cloud computing prices have been creeping up because the services are offered by for-profit companies that must generate a profit. Running a public cloud service is costly, and the billions invested over the past 12 years must show investors a return. That’s why prices have been increasing, not to mention the additional value cloud providers can offer, such as integrated AI, finops, operations, etc. At the same time, the cost of producing hardware, such as traditional HDD storage, has dropped, adding a new level of confusion. Now it’s a viable alternative to cloud-based storage systems. Thus, picking cloud computing over traditional hardware is no longer a quick decision. ... Again, this is about being entirely objective when looking at all potential solutions, both cloud and on-premises. Cost being equal, cloud computing will be the better choice nine times out of 10, but now that the prices are very different, that may not be the case. If you’re the person making these calls, you must consider all aspects of these solutions, including future criteria.


Leveraging Cloud DevOps to drive digital optimization and maximizing cloud benefits

Organizations need to consider several factors to ensure a successful implementation that delivers the desired business value. They must begin by identifying the business drivers and putting in place a change management program. They must train or upskill their employees and focus on the structural and process changes required to foster collaboration between different functions. Companies must define measurable goals and KPIs and establish governance after considering the prevailing technology landscape and the future roadmap. They must know and assess their existing infrastructure and portfolio, their requirements, the current challenges they face, the right cloud mix appropriate to their needs, and so on. They must plan their resources and identify their security needs to ensure Cloud DevOps can meet these requirements. ... The success of Cloud DevOps can be measured along four primary dimensions: efficiency, agility, reliability, and quality.


Spatial Data Science: The Basics You Need to Know

“Spatial data science is data science on geospatial data -- location data, navigation data, GPS data, any data that is geocoded,” Kobielus explained. “Geospatial data science builds on and extends the capabilities of geographic information systems.” ... When asked about principal use cases for spatial data science, Kobielus suggested several possibilities. “A core and mainstream enterprise application for spatial data science has been address management. Customer information management needs to be integrated with permanent addressing which then is geocoded so that as your customers move around you always know what their actual address is.” Other possible uses include determining optimal locations for things such as retail outlets or manufacturing facilities, optimizing supply chain logistics, tracking inventory, personalizing user experiences on mobile devices, allowing businesses to provide targeted content, and indoor applications to help organizations optimally arrange things within warehouses or other indoor spaces.
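Much of this location work reduces to simple geodesic math on geocoded points. As one basic building block, the distance between two latitude/longitude pairs can be computed with the standard haversine formula (the coordinates in the note below are approximate):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two lat/lon points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))
```

For instance, `haversine_km(34.05, -118.24, 37.77, -122.42)` gives roughly 560 km between Los Angeles and San Francisco; distances like this feed directly into site selection and logistics optimization of the kind Kobielus describes.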


Forecasting Team Performance

A useful technique is systems mapping. To do it, you first identify qualitative factors and situations affecting your organization. By qualitative, I’m referring to what you can't see on a neat dashboard or chart — the things that only come up in casual 1:1 calls with people, or in water cooler conversations and healthy retrospectives. Next, think about second-order effects. These are the consequences of those qualitative factors, which have further implications, and so on. Researchers have shown that second-order effects are a big blind spot for the human brain; we think in terms of linear cause-and-effect and so often miss significant domino-effect repercussions. One way of making sure you’re sensitive to these second-order effects is to talk to people who have been in the organization for a long time. They're often the people most eager to share the types of stories that will help you draw connections between seemingly unrelated situations.


When Do Agile Teams Make Time for Innovation?

The unintended side effects of shorter sprints and sprint commitments can be devastating for creativity and breakthroughs. Teams that feel pressured by time or fear of failure aren't going to feel safe to experiment. In the absence of psychological safety, innovation recedes. It's critical that agile teams push back against this pressure to deliver and never fail. A recent Harvard Business Review article on psychological safety said, "In essence, agile’s core technology isn’t technical or mechanical. It’s cultural." Or as Entrepreneur.com put it, “Your company needs an innovation culture, not an innovation team. You don't become an innovative company by hiring a few people to work on it while everybody else goes through the motions.” As agilists, we must fight to establish experimentation as part of our company culture. Companies that value innovation empower self-organizing teams to try new things, encourage (and fund) continuous learning and improvement, solicit and act on feedback and ideas, and emphasize collaboration and communication.


Microsoft attacked over ‘grossly irresponsible’ security practice

Yoran said the so-called shared responsibility model of cyber security espoused by public cloud providers, including Microsoft, was irretrievably broken if a provider fails to notify users of issues as they arise and apply fixes openly. He argued that Microsoft was quick to ask for its users’ trust and confidence, but in return they get “very little transparency and a culture of toxic obfuscation”. “How can a CISO, board of directors or executive team believe that Microsoft will do the right thing given the fact patterns and current behaviours? Microsoft’s track record puts us all at risk. And it’s even worse than we thought,” said Yoran. “Microsoft’s lack of transparency applies to breaches, irresponsible security practices and to vulnerabilities, all of which expose their customers to risks they are deliberately kept in the dark about,” he added. A Microsoft spokesperson said: “We appreciate the collaboration with the security community to responsibly disclose product issues. 


Managing Partnership Misfits

To get the right stakeholders on board and collaborating, project initiators must combine engagement and containment strategies. And to do this, they need more practical and nuanced guidance along with a new set of lenses through which to assess the suitability of potential partners and to identify, motivate, or control misfits. In other words, they need a tool to identify potential fault lines in future partnerships and to help iron out or contain misalignments. Based on in-depth studies of successful and unsuccessful partnerships, we propose a framework that tests partner fit across three dimensions: task-fit (what each party needs); goal-fit (what each party aims to achieve); and relationship-fit (how each party works). How potential partners measure up on these dimensions flags likely misalignments with a prospective partner and allows project initiators to design ways to overcome them. ... You are looking for a partner with the required capabilities or resources who values the expected gains, which are not just financial rewards, but could relate to learning, inspiration, or reputation. 


Comparing Different Vector Embeddings

In the simplest terms, vector embeddings are numerical representations of data. They are primarily used to represent unstructured data. Unstructured data are images, videos, audio, text, molecular images and other kinds of data that don’t have a formal structure. Vector embeddings are generated by running input data through a pretrained neural network and taking the output of the second-to-last layer. Neural networks have different architectures and are trained on different data sets, making each model’s vector embedding unique. That’s why working with unstructured data and vector embeddings is challenging. Later, we’ll see how models with the same base fine-tuned on different data sets can yield different vector embeddings. The differences in neural networks also mean that we must use distinct models to process diverse forms of unstructured data and generate their embeddings. For example, you can’t use a sentence transformer model to generate embeddings for an image. On the other hand, you wouldn’t want to use ResNet50, an image model, to generate embeddings for sentences.
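The "second-to-last layer" idea can be shown with a toy network in plain Python. The two tiny "models" below are made-up stand-ins; real embedding models such as sentence transformers or ResNet50 have millions of learned parameters:

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def embed(x, layers):
    """Run input through every layer except the last; the penultimate
    activation is the vector embedding."""
    h = x
    for W in layers[:-1]:
        h = relu(matvec(W, h))
    return h

def cosine(a, b):
    """Cosine similarity, the usual way embeddings are compared."""
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    return dot / (na * nb)

# Two toy "models" with different weights: the same input yields
# different embeddings, mirroring how differently trained networks
# embed the same data differently.
model_a = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0]]]
model_b = [[[0.5, 0.5], [1.0, -1.0]], [[1.0, 1.0]]]
```

Running `embed` with the same input through `model_a` and `model_b` produces different vectors, which is exactly why embeddings from different models, or the same base model fine-tuned on different data sets, are not interchangeable.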


Could C2PA Cryptography be the Key to Fighting AI-Driven Misinformation?

The C2PA specification is an open source internet protocol that outlines how to add provenance statements, also known as assertions, to a piece of content. Provenance statements might appear as buttons viewers could click to see whether the piece of media was created partially or totally with AI. Simply put, provenance data is cryptographically bound to the piece of media, meaning any alteration to either one of them would alert an algorithm that the media can no longer be authenticated. You can learn more about how this cryptography works by reading the C2PA technical specifications. This protocol was created by the Coalition for Content Provenance and Authenticity, also known as C2PA. Adobe, Arm, Intel, Microsoft and Truepic all support C2PA, which is a joint project that brings together the Content Authenticity Initiative and Project Origin. The Content Authenticity Initiative is an organization founded by Adobe to encourage providing provenance and context information for digital media. 
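The binding idea can be sketched as follows. This is an illustrative stand-in only: actual C2PA manifests are embedded in the media file and signed with X.509 certificate-based signatures, not the shared-key HMAC used here for brevity, and the field names are invented:

```python
import hashlib
import hmac

def make_manifest(media_bytes: bytes, assertions: str, key: bytes) -> dict:
    """Bind provenance assertions to a media file via its hash, then sign."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = f"{digest}|{assertions}".encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"hash": digest, "assertions": assertions, "signature": signature}

def verify(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Any change to the media or the assertions breaks authentication."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != manifest["hash"]:
        return False  # media was altered
    payload = f"{digest}|{manifest['assertions']}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

The key property matches the one described above: altering either the media bytes or the provenance statement invalidates the signature, so the content can no longer be authenticated.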


Multi-modal data protection with AI’s help

First, there is a malicious mind behind the scenes thinking and scheming on how to change a given message for exfiltration. That string for exfil is not intrinsically tied to a medium: it could go out over Wi-Fi, mobile, browser, print, FTP, SSH, AirDrop, steganography, screenshot, Bluetooth, PowerShell, buried in a file, over a messaging app, in a conferencing app, through SaaS, in a storage service, and so on. A mind must consciously seek a method and morph the message to a new medium with an adversary and their toolkit in mind to succeed and, in this case, to get points in the hackathon. Second, a mind is required to recognize the string in its multiple forms or modes. Classic data loss prevention (DLP) and data protection work with blades that are disconnected from one another: a data type is searched for with unique search criteria and an expected sampling data type and format. These can be simple, such as credit card numbers or social security numbers in HTTP, or complex, like looking for data types that look like a contract in email attachments.
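The "simple" end of that spectrum, pattern-matching for credit card and social security numbers, can be sketched as below. The regexes and the Luhn checksum filter are illustrative only, not a production DLP blade:

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # digits with optional separators
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_ok(number):
    # Luhn checksum filters out most random digit runs that merely look like cards.
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text):
    findings = []
    for m in CARD_RE.finditer(text):
        if luhn_ok(m.group()):
            findings.append(("credit_card", m.group().strip()))
    for m in SSN_RE.finditer(text):
        findings.append(("ssn", m.group()))
    return findings

print(scan("card 4111 1111 1111 1111 and ssn 123-45-6789"))
```

The point of the excerpt is that this per-pattern, per-channel approach cannot by itself recognize the same string once an adversary morphs it into another medium or encoding.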



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - August 03, 2023

When your teammate is a machine: 8 questions CISOs should be asking about AI

There are many potential benefits that can flow from incorporating AI into security technology, according to Rebecca Herold, an IEEE member and founder of The Privacy Professor consultancy: streamlining work to shorten project finish times, making quicker decisions, and finding problems more expeditiously. But, she adds, there are a lot of half-baked instances being employed and buyers "end up diving into the deep end of the AI pool without doing one iota of scrutiny about whether or not the AI they view as the HAL 9000 savior of their business even works as promised." She also warns that when "flawed AI results go very wrong, causing privacy breaches, bias, security incidents, and noncompliance fines, those using the AI suddenly realize that this AI was more like the dark side of HAL 9000 than they had even considered as being a possibility." To avoid having your AI teammate tell you, "I'm sorry, Dave, I'm afraid I can't do that," when you are asking for results that are accurate, non-biased, privacy-protective, and in compliance with data protection requirements, Herold advises that every CISO ask eight questions.


Generative AI needs humans in the loop for widespread adoption

Generative AI by itself has many positives, but it is currently a work in progress and it will need to work with humans for it to transform the world - which it is almost certain to do. This blending of man and machine is best described as “AI with humans in the loop” and it is already being widely adopted by businesses who want to cut operating costs and improve customer services, but also realise that humans will be crucial if these objectives are to be achieved. One of the sectors embracing this new normal is financial journalism. Reuters managing director Sue Brooks announced that AI will be used to cover news stories and will create a “golden age” of news. Crucially, she also said it was vital there “was always a human in the loop to ensure total accuracy”. Reuters content now has automated time-coded transcripts and translation of many languages into English, part of the Reuters Connect service. Brooks went on to say that this meld would “free up brain power to be creative and put all these tools in your toolbox to create magical experiences for readers”.


AI chip adds artificial neurons to resistive RAM for use in wearables, drones

According to Weier Wan, a graduate researcher at Stanford University and one of the authors of the paper, published in Nature yesterday, NeuRRAM has been developed as an AI chip that greatly improves energy efficiency of AI inference, thereby allowing complex AI functions to be realized directly within battery-powered edge devices, such as smart wearables, drones, and industrial IoT sensors. "In today's AI chips, data processing and data storage happen in separate places – computing unit and memory unit. The frequent data movement between these units consumes the most energy and becomes the bottleneck for realizing low-power AI processors for edge devices," he said. To address this, the NeuRRAM chip implements a "compute-in-memory" model, where processing happens directly within memory. It also makes use of resistive RAM (RRAM), a memory type that is as fast as static RAM but is non-volatile, allowing it to store AI model weights. 


The CISO role has changed, and CISOs need to change with it

Perhaps the best way to improve security—and make the CISO’s job a little easier—is not reliant on technology. A change in culture is the best way to truly create an organization where security is top of mind. CISOs, part of upper management, but also part of the security team, are uniquely positioned to lead this change – both with other leaders and those they lead. A security-first culture requires embedding security in everything a business does. Developers should be enabled to create secure code that is free from vulnerabilities and resistant to attacks as soon as it is written, as opposed to being a consideration much later in the SDLC. Designated security champions from the developer ranks should lead this charge, acting as both coach and cheerleader. This approach means that security is not being mandated from above, but part of the team’s DNA and backed up by management. This cannot be an overnight change, and may be met with resistance. But the threat landscape is too complex, too advanced and too ubiquitous for any one person or even a small team to handle alone.


Hosting Provider Accused of Facilitating Nation-State Hacks

The allegations, whether true or not, are a reminder that cybercrime doesn't operate in a vacuum. Rather, there's a burgeoning service and support ecosystem. Services include initial access brokers who provide on-demand access to victims, botnet owners who facilitate malware-laden phishing attacks, and repacking services that make malware tougher to spot. They also include ransomware-as-a-service operators who lease their code to business partners, the affiliates who use it to infect victims, and cryptocurrency money laundering services that help criminals - operating online or off - convert their ill-gotten gains into cash. Online attackers require infrastructure for launching their attacks. Some make use of bulletproof service providers, which provide VPS and other types of hosting services in return for a promise, typically for a relatively high fee, that customers can do whatever they like. Halcyon's report alleges that Cloudzy functionally operates in a similar manner, due to a lack of proper oversight, including allowing cryptocurrency-using customers to be able to remain anonymous.


The tug-of-war between optimization and innovation in the CIO’s office

The downside of prioritizing optimization is the risk of overlooking opportunities for innovation that could have long-term impacts on the organization’s growth and relevance. Think game-changing new systems, such as AI, that increase supply chain efficiency, or automated manufacturing steps that speed up productivity and reduce costs at the same time. Usually, the value of a business is directly defined by the innovations that can drive it. Think about the services we use now, from food delivery to home sharing, with the draw being better customer experiences through innovation. Emphasizing innovation enables companies to stay ahead of the curve, attracting customers with cutting-edge products and services. ... These mistakes will kill a company. Taking resources away from innovation and spending them on making things work as they should removes business value. I think we’re going to see a great many businesses spend so much money to fix past mistakes that they’ll end up throwing in the towel. 


Flight to cloud drives IaaS networking adoption

IDC describes IaaS cloud networking as a foundational networking layer that allows large enterprises and technology providers to connect data centers, colocation environments, and cloud infrastructure. With IaaS networking, the network infrastructure and services are scalable and available on-demand, provisioned and consumed just like any other cloud service. That makes this infrastructure more scalable and agile than traditional approaches to networking, according to IDC. Direct cloud connects/interconnects is the largest segment of IaaS networking, accounting for more than half of all IaaS networking revenue. The four other major segments of the IaaS networking market are cloud WAN (transit), IaaS load balancing, IaaS service mesh, and cloud VPNs (to IaaS clouds), according to IDC. Cloud WAN, which includes cloud middle-mile and core transit networks, is the fastest-growing segment of IaaS networking, with a forecasted five-year compound annual growth rate of 112%, says IDC. IaaS service meshes are also expected to see strong growth, with a forecasted five-year compound annual growth rate of 68%.
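Those CAGR figures compound dramatically. A quick back-of-the-envelope check of the implied five-year growth multiples:

```python
def growth_multiple(cagr, years):
    # Total growth multiple implied by a compound annual growth rate.
    return (1 + cagr) ** years

# IDC's forecast CAGRs for the two fastest-growing IaaS networking segments.
print(round(growth_multiple(1.12, 5), 1))  # cloud WAN: ~42.8x over five years
print(round(growth_multiple(0.68, 5), 1))  # service mesh: ~13.4x over five years
```

In other words, a 112% CAGR means the cloud WAN segment would be more than 40 times its current size in five years if the forecast holds.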


The rise of Generative AI in software development

AI is accelerating the process of going from zero to one – it jumpstarts innovation, releasing developers from the need to start from scratch. But the 1 to n problem remains – they start faster but will quickly have to deal with issues like security, governance, code quality, and managing the entire application lifecycle. The largest cost of an application isn't creating it – it's maintaining it, adapting it, and ensuring it will last. And if organisations were already struggling with tech debt (code left behind by developers who quit, or by vendors who sunset apps, creating monstrous workloads to take care of), now they'll also have to handle massive amounts of AI-generated code that their developers may or may not understand. As tempting as it may be for CIOs to assume they can train teams on how to prompt AI and use it to get any answers they need, it might be more efficient to invest in technologies that help you leverage Gen AI in ways that you can actually see, control and trust. This is why I believe that in the future, fundamentally, everything will be delivered on top of AI-powered low-code platforms. 


Will law firms fully embrace generative AI? The jury is out | The AI Beat

On one hand, gen AI is shaking up the legal industry, with companies like Everlaw adding options to their product portfolio, while Thomson Reuters can integrate with Microsoft 365 Copilot to power legal content generation directly in Word. On the other hand, lawyers tend to be a conservative bunch — and in this case, attorneys would likely be wise to be cautious, with headlines like “New York lawyers sanctioned for using fake ChatGPT cases in legal brief” going viral. Another problem is that their clients may not feel comfortable with law firms using gen AI — a new survey found that one-third of consumer respondents said they’re against any use of gen AI in the legal field. ... But with Everlaw’s new gen AI now available in beta, lawyers can go beyond just clustering data at the aggregate level to querying, summarizing and otherwise extracting details from documents to get what they need. For example, the company says that while it typically takes hours for a legal professional to compose a statement of facts, it can now happen in about 10 seconds, delivering legal teams a rough draft to edit and fact check. 


Vulnerability Management: Best Practices for Patching CVEs

In a perfect world, you would analyze all CVEs first to determine the priority order for patching. But this just isn’t scalable due to the sheer number of vulnerabilities and how frequently CVEs are discovered. In reality, only a handful of CVEs actually affect your software. Of course, there’s no way to know for certain how a CVE affects your application until it has been analyzed, but because there are so many, including those from transitive dependencies, it is nearly impossible to analyze them all before new CVEs are discovered, or within the window of a tight release schedule. Instead, we recommend you start by patching all critical and high-severity CVEs without analysis. ... Preventing, detecting and patching CVEs needs to be a shared responsibility between developers and security teams. It is not sustainable for security teams to bear the responsibility of managing and patching CVEs alone. Development teams can often be hesitant to push frequent updates for fear that updates to software libraries will create bugs in their software.
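The recommended first pass (patch critical and high-severity CVEs without analysis, defer the rest) is a simple filter. A minimal sketch with hypothetical CVE records, using the common CVSS qualitative severity scale:

```python
# Hypothetical CVE records for illustration only.
cves = [
    {"id": "CVE-2023-0001", "severity": "critical"},
    {"id": "CVE-2023-0002", "severity": "low"},
    {"id": "CVE-2023-0003", "severity": "high"},
    {"id": "CVE-2023-0004", "severity": "medium"},
]

PATCH_WITHOUT_ANALYSIS = {"critical", "high"}

def triage(cves):
    # Split the queue: patch critical/high immediately, analyze the rest as time allows.
    patch_now = [c for c in cves if c["severity"] in PATCH_WITHOUT_ANALYSIS]
    analyze_later = [c for c in cves if c["severity"] not in PATCH_WITHOUT_ANALYSIS]
    return patch_now, analyze_later

patch_now, analyze_later = triage(cves)
print([c["id"] for c in patch_now])  # critical and high severity, patched without analysis
```

In a real pipeline the severity would come from a scanner or the NVD feed rather than a hard-coded list, but the triage logic is the same.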



Quote for the day:

"Our greatest battles are with our own minds." -- Jameson Frank

Daily Tech Digest - August 02, 2023

Return-to-office mandates rise as worker productivity drops

In the first quarter of 2023, labor productivity dropped 2.1% in the US, even as the number of hours worked increased by 2.6%, according to the BLS. The highest levels of remote workers are in North America and Northern Europe, with lower levels in Southern Europe, and fewer still in Asia — particularly in developing countries, according to a study by Stanford University’s Institute for Economic Policy Research (SIEPR) released in July. ... “Bosses want workers back in the office; workers want flexibility,” said Peter Miscovich, the managing director of Jones Lang LaSalle IP (JLL), a global real estate investment and management firm that tracks remote work trends. But current return-to-office mandates haven't always been effective and they risk driving employees away, according to Miscovich. "Given current low-unemployment rates — particularly in technology fields — talent has the upper hand and will have the upper hand over the next 10 to 15 years,” Miscovich said. While some companies have drawn attention for heavy-handed tactics to get employees back to the office, others are succeeding at getting buy-in for structured hybrid work policies.


IT professionals: avoiding bad days at work

The most common cause of stress is work-related, with one recent study showing that 79% of UK professionals say they frequently feel stressed, and our own research revealed that over two-thirds of IT leaders (70%) reported that there is pressure to deliver security protection in a short amount of time. Whilst organisations must be able to identify the sources of stress to support their people, unfortunately, it must be noted that due to the nature of working with technology, IT professionals will encounter stressful situations – whether the solution is to turn it off and on again or something much more serious. Having the right mix of people, processes and technology will assist in minimising these situations; however, when they do occur, it is vital that leaders are able to recognise them and support their people. This comes back to ensuring the most appropriate technology is in place, along with having clear plans and processes in place to best support the needs of the organisation, its people and its customers.


Why synthetic data is a must for AI in telecom

Synthetic data reflects real-world data both mathematically and statistically. But rather than being collected from and measured in the real world, it is created by computer simulations, algorithms, simple rules, statistical modeling and other techniques based on small, anonymized real-world samples. “While real data is almost always the best source of insights from data, real data is often expensive, imbalanced, unavailable or unusable due to privacy regulations,” Gartner VP analyst Alexander Linden said in a Q&A blog post. “Synthetic data can be an effective supplement or alternative to real data.” Artificial data can help mitigate weaknesses in real data or can be used when no live data exists, when data is highly sensitive or otherwise biased, or can’t be used, shared or moved. It doesn’t always have to be trained on real data, however: it can be generated just by looking at domain or institutional knowledge or traces of real data. With the massive explosion in the use of data-hungry generative AI models and the necessity of privacy and security, enterprises across industry segments are recognizing the potential in synthetic data.
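A minimal sketch of the statistical-modeling approach: fit summary statistics on a small anonymized sample, then draw a much larger synthetic set from the fitted distribution. The sample values here are hypothetical, and a single Gaussian fit is far cruder than real synthetic-data generators, but it shows the principle of matching statistics without exposing real records:

```python
import random
import statistics

random.seed(42)

# A small, anonymized "real-world" sample (hypothetical call durations, seconds).
real_sample = [112, 95, 130, 101, 88, 124, 109, 97]

mu = statistics.mean(real_sample)
sigma = statistics.stdev(real_sample)

# Generate a much larger synthetic data set that matches the sample's
# mean and standard deviation, without reusing any real record.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

print(round(mu, 1), round(statistics.mean(synthetic), 1))
```

Production tools model correlations between fields and categorical distributions too, but the trade-off is the same: statistical fidelity to the sample without a one-to-one link back to any real individual.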


DDoS Attacks and the Cyber Threatscape

Occasionally, DDoS attacks were carried out to extort ransom payments, colloquially known as Ransom DDoS (RDDoS) attacks. The RDDoS attack should not be mistaken for ransomware, which may be driven by similar motivations but employs different tactics, techniques, and procedures (TTPs). The operational method in ransomware requires ‘denial of data’ by a malicious script, whereas RDDoS involves denial of service, generally by a botnet. Running a ransomware operation requires access to internal systems, which is not the case in ransom DDoS attacks. In RDDoS, threat actors leverage the threat of denial of service to conduct extortion, which may include sending a private message by email demanding a ransom to prevent the organisation from being targeted by a DDoS attack. According to a threat intelligence report, throughout the 2020–2021 global RDDoS campaigns, attacks ranged from a few hours up to several weeks with attack rates of 200 Gbps and higher. The DDoS attack can also serve as a means of reconnaissance, allowing attackers to assess the target’s vulnerabilities and gauge the strength of its defenses.


MDM’s Role in Strengthening Data Governance Practices

Ensuring regulatory compliance and the trustworthiness of data is paramount. This is where a systematic process comes into play, and Gartner MDM is leading the way in providing a comprehensive solution. With the ability to configure data governance policies, capture metadata, and perform data lineage, Gartner MDM allows for a full understanding of data assets and their use. This translates into improved compliance, reduced risk, and enhanced data trustworthiness. By implementing a systematic process that includes Gartner MDM, organizations can confidently navigate the complex landscape of regulatory requirements, safeguard data integrity, and ultimately increase customer trust. ... Data Governance has become essential with the ever-increasing amount of data organizations generate. However, manually reviewing and managing such a large amount of data can be challenging and time-consuming. This is where automation techniques come into play. By automating data governance processes, organizations can streamline the process, reduce errors, and make better decisions based on the data. 


Delivering privacy in a world of pervasive digital surveillance: Tor Project’s Executive Director speaks out

Our stance is clear: we think that encryption is a right – which is why it is built into our technology. As more and more aspects of our lives are carried out digitally, whether it is conducting financial transactions, accessing health care services or staying in touch with friends and loved ones, our online activity should be governed by the same rights to privacy and anonymity as our analog experiences. As part of our work, the Tor Project is currently active in the debate around the need to safeguard E2EE. We are engaged in advocacy work on the issue and have supported other organizations in their efforts to raise awareness, especially as part of the Global Encryption Coalition. ... Earlier this year, we launched the Mullvad Browser, a free, privacy-preserving browser offering similar protections as Tor Browser without the Tor network. Mullvad Browser is another option for internet users who are looking for a privacy-focused browser that doesn’t need a bunch of extensions and plugins to enhance their privacy and reduce the factors that can accidentally de-anonymize themselves.


The Debate Around AI Ethics in Australia is Falling Far Behind

In 2016, the World Economic Forum looked at the top nine ethical issues in artificial intelligence. These issues have all been well-understood for a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, which were future-thinking at the time, are starting to become reality, yet the ethical concerns have yet to be actioned. ... The WEF noted the potential for AI bias back in its initial article, and this is one of the most talked-about and debated AI ethics issues. There are several examples of AI assessing people of color and gender differently. However, as UNESCO noted just last year, despite the decade of debate, biases of AI remain fundamental right down to the core. “Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. ...”


Vigilance advised if using AI to make cyber decisions

Artificial intelligence (AI) and machine learning (ML) driven tools and technologies are on the rise to help organizations address these challenges by significantly improving their security posture efficiently and effectively. Tools using ML and AI are improving accuracy and speed of response. ... The vendor may have utilised AI in various product development stages. For instance, AI could have been employed to shape the requirements and design of the product, review its design or even generate source code. Additionally, AI might have been used to select relevant open-source code, develop test plans, write the user guide or create marketing content. In some cases, AI could be a functional product component. However, it’s important to note that sometimes an AI capability might really be machine learning (ML). Determining the legitimacy of AI claims can be challenging: the vendor’s transparency and supporting evidence are crucial. Weighing the vendor’s reputation, expertise and track record in AI development is vital for distinguishing authentic AI-powered products from “snake oil.”


3 GitOps Myths Busted

It is highly likely that as your organization embarks on its cloud native journey, there will come a point where scaling to multiclusters becomes necessary. For instance, developers may need to work on and test applications before making pull requests without having direct access to the production code, of course, for applications running in production on Kubernetes. Moreover, in certain scenarios, a team might manage multiple clusters and distribute workloads among them to ensure sufficient fault tolerance and availability. For example, when running a machine learning training workload, the team might increase the number of replicas or cluster replicas to meet specific demands. Additionally, different clusters may be deployed across various physical locations in cloud environments, whether on Amazon Web Services, Azure, GCP and others, requiring separate tools and processes to align with geographic mandates, legal restrictions, compliance requirements, and data access policies.


Simplifying IT strategy: How to avoid the annual planning panic

In developing your strategy, you have two responsibilities related to the finances of any proposed project: First, you must articulate the costs and benefits of the project; and second, you must contextualize those costs and benefits by comparing them to overall budget projections, which should include multi-year projections that align with the needs and norms of your finance organization. Not sure how to frame the numbers? Borrow revenue projections from FP&A, then layer in projected IT run-rate spend, IT project spend for each year in the forecast, and summarize total IT spend as a percentage of revenue. Hint: Be ready to explain any increase in this metric. ... What will you need from others for your plan to succeed? Dedicated resources from BUs and functions? Participation in steering committees? Incremental funding? The point is you can’t drive a transformation alone. Key to success will be clarifying roles and responsibilities and ensuring others have skin in the game. ... Once you’ve tried answering the questions, consult your deputies. Test and refine your hypothesis as a group. 
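The suggested framing (run-rate plus project spend, summarized as a percentage of revenue) is simple arithmetic. A sketch with hypothetical figures, all in $M:

```python
# Hypothetical multi-year projection for illustration only.
revenue = [500, 550, 600]        # borrowed from FP&A
run_rate = [40, 42, 44]          # projected IT run-rate spend
project_spend = [10, 15, 12]     # IT project spend per forecast year

for year, (rev, run, proj) in enumerate(zip(revenue, run_rate, project_spend), start=1):
    total = run + proj
    pct = 100 * total / rev
    print(f"Year {year}: total IT spend ${total}M = {pct:.1f}% of revenue")
```

Tracking that final percentage year over year is what surfaces the increases you will be asked to explain.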



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham