Daily Tech Digest - August 10, 2023

AMD's Zen architecture: The fundamentals of these Zen 4 CPUs

While the computing industry, CPU enthusiasts, and even AMD itself expected the road to performance leadership to be long, it was actually quite short. Zen 2, the successor to Zen, launched in 2019 and shocked pretty much everyone by blowing Intel out of the water. AMD racked up a massive lead in multi-threaded performance in pretty much every segment, had significantly better power efficiency in virtually every workload, and even surpassed Intel in single-threaded performance, which AMD hadn't been able to do for over a decade. From here, the road just got easier for AMD. The server market was (and still is) the most important area for AMD to make progress in, and by the time Zen 3 came out in 2020, AMD controlled 7% of the market, up from nearly 0% before Zen came out. This was made all the easier thanks to how Intel absolutely screwed up its plans to launch powerful 10nm CPUs, leaving AMD to face off against outdated and practically obsolete 14nm chips, which are some of the worst Intel has ever made.


Embracing the ‘Pedagogy of Error’ in Cybersecurity Education

The lesson I am always reminded of is that “we must abandon certainties in order to build from the challenge of uncertainty.” The deeper we delve into global instabilities and their challenges, the better the perspectives we gain and the questions we can ask ourselves. It would be very sad to learn that everything had already been solved. Therefore, when we challenge current knowledge and explore different alternatives, we open up the possibility of seeing beyond what is known and, in doing so, of introducing something different. ... Academia must sustain and encourage the curiosity, expectations, challenges and adventures that arise when uncertainty manifests itself in the inevitability of failure. In this sense, it must promote the pedagogy of “error.” Understanding “error” as part of the process, not as a result, is what makes it possible to form cybersecurity and IT professionals who are open to constant learning, willing to have their prior knowledge questioned, and able to maintain a proactive stance in the face of adversaries’ challenges.


The dark side of the cloud: How cloud is becoming prey to sophisticated forms of cyber attack

As businesses increasingly adopt cloud-based solutions, cyber criminals—who are constantly looking for new vulnerabilities to exploit—are finding it easier to engineer data breaches, explains Rajesh Garg, EVP, Chief Digital Officer & Head of Applications & Cybersecurity at data centre service provider Yotta Data Services. Around 98 per cent of organisations globally now use some form of cloud-based technology, and many have adopted multi-cloud deployments spanning several cloud service providers. This massive adoption of cloud environments has also given rise to shadow IT, where employees or departments use hardware or software from external sources without the knowledge of the organisation's IT or security group. This creates a vacuum in which responsibility for managing security within the organisation is not clearly defined. “Cloud infrastructure is inherently complex; that increases manifold with the addition of hybrid and multiple-cloud models,” says Atul Gupta.


Google Cloud launches Chronicle CyberShield to help government agencies tackle threats

A primary component of Chronicle CyberShield is establishing a modern government security operations center (SOC), comprising a network of interconnected SOCs to scale and aggregate security threats, Google Cloud said in a press release. Chronicle CyberShield enables governments to leverage cyber threat intelligence from Google and Mandiant, now part of Google Cloud, to build a scalable and centralized threat intelligence and analysis capability, according to the firm. This is integrated operationally into the government SOC to identify suspicious indicators and enrich the context for known vulnerabilities. The solution also allows governments to build a coordinated monitoring capability with Chronicle SIEM to simplify threat detection, investigation, and hunting with the intelligence, speed, and scale of Google. By implementing Chronicle across a network of SOCs, attack patterns and correlated threat activity across multiple entities are available for investigation and analysis. 


International implications of hack-for-hire services

A lack of consequences for hackers who contract themselves out to foreign clients has only encouraged the hack-for-hire industry in India. US prosecutors indicted Sumit Gupta, the director of Indian hacking firm BellTroX, in 2015 for hacking on behalf of two American lawyers, yet the Indian government never took action against him. After the 2015 case failed to produce a conviction, BellTroX went on to carry out the Dark Basin hacks in 2020. BellTroX also surfaced as part of a criminal case against an Israeli private detective who hired Indian hacking firms on behalf of unnamed clients in Israel, Europe, and the US. The private detective pleaded guilty in 2022, but the hackers in India have yet to face any legal consequences. This lack of enforcement is not because India does not have the legal infrastructure to prosecute cybercrimes; the Information Technology Act of 2000, and its subsequent amendments in 2008


Windows Defender-Pretender Attack Dismantles Flagship Microsoft EDR

In studying the Windows Defender update process, Bar and Attias discovered that signature updates are typically contained in a single executable file called the Microsoft Protection Antimalware Front End (MPAM-FE[.]exe). The MPAM file in turn contained two executables and four additional Virtual Device Metadata (VDM) files with malware signatures in compressed — but not encrypted — form. The VDM files worked in tandem to push signature updates to Defender. The researchers discovered that two of the VDM files were large "Base" files containing some 2.5 million malware signatures, while the other two were smaller but more complex "Delta" files. They determined the Base file was the main file Defender checked for malware signatures during the update process, while the smaller Delta file defined the changes that needed to be made to the Base file. Initially, Bar and Attias attempted to see if they could hijack the Defender update process by replacing one of the executables in the MPAM file with a file of their own.
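The Base/Delta scheme the researchers describe is a standard delta-patching pattern. As a rough illustration only — the real VDM structures are undocumented, and the operations and field names below are hypothetical — applying a delta to a base signature set might look like this:

```python
# Conceptual sketch of a base-plus-delta signature update.
# NOT the real VDM format; operations and names are hypothetical.

def apply_delta(base: dict, delta: list) -> dict:
    """Apply (op, name, value) change records to a base signature set."""
    sigs = dict(base)
    for op, name, value in delta:
        if op in ("add", "replace"):
            sigs[name] = value
        elif op == "remove":
            sigs.pop(name, None)
    return sigs

base = {"Trojan.A": "sig-aa11", "Worm.B": "sig-bb22"}
delta = [("add", "Ransom.C", "sig-cc33"), ("remove", "Worm.B", None)]
updated = apply_delta(base, delta)
```

Because the files are compressed but not encrypted, anyone able to tamper with a Delta file in transit or on disk could silently change which signatures the engine ends up with, which is the kind of weakness the researchers set out to probe.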


Securing The Future: Embracing Cloud-Centric Cybersecurity Strategies

Upskilling an entire cybersecurity organization is a significant undertaking that requires planning, time, funding and—most importantly—leadership buy-in. CISOs won't be able to snap their fingers and transform their teams into the cloud-literate leaders of tomorrow. After all, it could take up to six months of training just to hold an intelligent-sounding conversation about the cloud, let alone be productive. Fortunately, much of the educational infrastructure necessary for upskilling workforces is available. Cloud service providers AWS, Microsoft Azure and Google Cloud each have a portfolio of cloud computing certifications. Platforms such as A Cloud Guru and Cloud Academy offer multi-cloud training. Security-focused cloud training and certifications are available from organizations such as the SANS Institute, (ISC)2 and the Cloud Security Alliance. ... These senior leaders are generally no longer "hands on keyboard" professionals. They lead programs, set priorities and assign goals. Of course, they need to be conversant with the technology their organization uses.


Northern Ireland Police at Risk After Serious Data Breach

"This is the most serious breach I have ever seen, due to the potential it could lead to the death or injury of those whose data has been disclosed," said Brian Honan, who heads Dublin-based cybersecurity firm BH Consulting. Exposed information could be abused not only by criminals, including for revenge, but also by republican paramilitaries who continue to target police officers and employees. The most recent attack occurred in February, when off-duty senior detective John Caldwell was shot in a sports complex in Omagh. He survived with "life-changing" injuries, said the chairman of Northern Ireland's Police Federation. Authorities arrested 11 people and charged three with being members of a proscribed terrorist group - in this case, the New IRA, a splinter of the Provisional Irish Republican Army that rejects the final 1997 cease-fire that helped lead to the 1998 Good Friday Agreement. The PSNI says it is working "to identify any security issues" posed by the breach as quickly as possible, and it has notified the Information Commissioner's Office.


Ethics as a process of reflection and deliberation

You can integrate ethics into your projects by organising a process of ethical reflection and deliberation. You can organise a three-step process for that: (1) Put the issues or risks on the table: things that you are concerned about, things that might go wrong. (2) Organise conversations to look at those issues or risks from different angles; you can do this in your project team, but also with people from outside your organisation. (3) Make decisions, preferably in an iterative manner: take measures, try them out, evaluate outcomes, and adjust accordingly. A key benefit of such a process is that you can be accountable; you have looked at issues, discussed them with various people, and have taken measures. Practically, you can organise such a process in a relatively lightweight manner, e.g., a two-hour workshop with your project team. Or you can integrate ethical reflection and deliberation in your project, e.g., as a recurring agenda item in your monthly project meetings, and involve various outside experts on a regular basis.


6 legal ‘gotchas’ that could sink your CIO career

You might be thinking that your company will defend you against liability, and you might be right if your company has liability coverage for its officers and you are an officer. But does your company have liability insurance for its executives? It’s standard for most Fortune 500 companies to insure their executives, but a substantial number of private and not-for-profit companies, facing rising premiums, may not have liability protection. If you’re interviewing for a CIO job, it’s prudent to find out whether the company offers liability protection and indemnification insurance for its executives. ... When CIOs are sued or fired, it’s often because of a significant cybersecurity breach. That’s because CIOs are ultimately responsible for safeguarding corporate information. When a breach occurs, it is always perceived as happening on the CIO’s watch, and the repercussions can be severe.



Quote for the day:

"We learn by example and by direct experience because there are real limits to the adequacy of verbal instruction." -- Malcolm Gladwell

Daily Tech Digest - August 09, 2023

You can’t run away from technical debt

It could be poor architecture because IT leaders picked the less efficient path to a solution. Perhaps they went with a specific vendor, even a cloud provider, for the wrong reasons, such as a preexisting relationship. This led to a solution that functions but adds rather than removes technical debt. I’ve heard the excuses: A decision was made to expedite solution delivery for an urgent business purpose. However, that’s almost never the case. Most of the time technical debt accumulates from misguided decisions; the company could have gone in a direction that did not create technical debt but did not. Indeed, many of the better solutions would have cost less money and taken less time to deploy. In other words, most technical debt is a collection of self-inflicted wounds, usually caused by leaders who don’t bother to understand the bigger picture and take technological shots in the dark. Of course, “it works,” but it significantly increases technical debt. I’ve second-guessed a great many of these decisions in my 40-year career.


Australia’s Banking Industry Mulls Better Cross-Collaboration to Defeat Scam Epidemic

The Australian banking sector, for its part, has already been looking for ways to work together to combat fraud. In May, 17 banks announced that, thanks to a collaboration between them, they had been able to halve the time it takes to identify and block payments to scam operators. This effort is powered by the ABA’s Fraud Reporting Exchange, an initiative that cross-matches data between participating banks and allows for near real-time communication of fraudulent transactions across the network. Other government initiatives, meanwhile, include the new National Anti-Scams Centre, which went live on July 1. This organization will enable faster sharing of information, so police and regulators can act on scams more quickly. There will also be an Australian Sender SMS ID registry that will provide a “whitelist” of phone numbers, which can be used to block scam calls and SMS messages that purport to come from government agencies.
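Mechanically, this kind of cross-matching amounts to pooling each bank's scam reports and screening outgoing payments against the pooled set. The sketch below is a hypothetical illustration of that idea, not the actual Fraud Reporting Exchange design:

```python
# Hypothetical illustration: participating banks report suspected scam
# accounts; a payment is flagged if any participant has reported the payee.

reports = {
    "bank_a": {"12-3456 000111", "98-7654 555000"},
    "bank_b": {"12-3456 000111"},  # same mule account reported by two banks
}

def flagged_accounts(reports: dict) -> set:
    """Union of all accounts reported by any participating bank."""
    flagged = set()
    for accounts in reports.values():
        flagged |= accounts
    return flagged

def should_flag(payee: str, reports: dict) -> bool:
    """A payment is held for review if its payee appears in any report."""
    return payee in flagged_accounts(reports)
```

The value of the shared exchange in this picture is that bank_b benefits from bank_a's report before bank_b's own customers have lost anything, which is what shortens the time to block payments.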


6 ways CIOs sabotage their IT consultant’s success

Here’s a promise made during negotiations that’s often DOA once the project starts: The client will provide the consultant with the information necessary for the project to move forward. Of course, once the project starts, it turns out that nobody in the client organization can provide that information. Why would the client make a promise like this? One reason: Whoever in the client organization is responsible for providing the information isn’t willing to admit that they can’t, either to their boss or to the consultants. In the short term it’s safer to make the promise and kick the can down the road, until the project has been going on long enough to shift the blame to those damned consultants who keep on making unrealistic requests of IT staff who are already overworked and underpaid. (Take a deep breath.) There’s another reason some clients can’t deliver information on demand: They’ve outsourced the IT functional area responsible for the information needed, and the outsourcer isn’t willing to help out consultants they see as likely competitors.


Technical vs. Adaptive Leadership

While technical leadership is essential, it does come with limitations. Relying solely on technical prowess can lead to a narrow focus, overlooking broader organizational dynamics and human factors. Additionally, in an ever-changing environment, technical skills can become outdated, necessitating a constant commitment to learning and adapting. Adaptive leadership, on the other hand, revolves around the ability to navigate uncertainty, ambiguity, and change. It is a leadership approach that focuses on guiding teams and organizations through transformational periods. Adaptive leaders are skilled at fostering resilience, encouraging creative problem-solving, and inspiring a culture of continuous learning. Adaptive leaders excel in communication and emotional intelligence. They possess the capacity to connect with their teams on a deeper level, empathizing with their challenges and aspirations. This ability to understand and relate to individuals creates an environment of trust, openness, and collaboration. 


Why big tech shouldn’t dictate AI regulation

Formed initially of Anthropic, Google, Microsoft, and OpenAI, the Forum is presented as an industry body which will ensure the ‘safe and responsible development of frontier AI models’. While not defined by the Forum’s initial press release, ‘frontier AI models’ can be understood to be general-purpose AI models which, in the words of the Ada Lovelace Institute, ‘have newer or better capabilities’ than other models. The forum’s objectives include undertaking AI safety research; disseminating best practices to developers; and collaborating with parties like academics, policymakers, and civil society bodies to influence the design and implementation of AI ‘guardrails’. Membership, meanwhile, will be restricted to organisations which (in the Forum’s eyes) both develop frontier models, and are committed to improving their safety. Admittedly, questions around the safe and effective development of AI will not arrive without investment, so it is encouraging to see a commitment to this collaborative approach amongst prominent AI vendors. Likewise, effective AI regulation will rely on input from those with real domain expertise: the industry’s doors must remain open to governments and policymakers.


Introduction to Apache Arrow

Apache Arrow is a framework for defining in-memory columnar data that every processing engine can use. It aims to be the language-agnostic standard for columnar memory representation to facilitate interoperability. It was developed by several open source leaders from companies also working on Impala, Spark and Calcite. Among the co-creators is Wes McKinney, creator of Pandas, a popular Python library used for data analysis. He wanted to make Pandas interoperable with data processing systems, a problem that Arrow solves. ... Another benefit of Apache Arrow is its integration with Apache Arrow Flight SQL. Having an efficient in-memory data representation is important for reducing memory requirements and CPU and GPU load. However, without the ability to transfer this data across networked services efficiently, Apache Arrow wouldn’t be that appealing. Luckily, Apache Arrow Flight SQL solves this problem. It is a “new client-server protocol developed by the Apache Arrow community for interacting with SQL databases that makes use of the Arrow in-memory columnar format and the Flight RPC framework.”
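To see what a columnar memory representation means in practice, compare row-oriented and column-oriented layouts of the same table. This is a plain-Python sketch of the concept, not the Arrow API itself:

```python
# The same three-row table in two memory layouts.

# Row-oriented: each record stored together (typical of OLTP systems).
rows = [
    {"city": "Oslo", "temp": 21.5},
    {"city": "Lima", "temp": 18.0},
    {"city": "Pune", "temp": 29.3},
]

# Column-oriented: one contiguous array per column, which is how Arrow
# lays data out in memory.
columns = {
    "city": ["Oslo", "Lima", "Pune"],
    "temp": [21.5, 18.0, 29.3],
}

# An analytical query touches only the columns it needs; with real Arrow
# buffers this is what enables cache-friendly scans and vectorised code.
avg_temp = sum(columns["temp"]) / len(columns["temp"])
```

Because every engine that speaks Arrow agrees on the columnar layout, data can be handed between, say, Pandas and a query engine without a serialise/deserialise step, which is the interoperability point made above.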


How to develop an intrapreneurial culture

A company that wants to inspire intrapreneurship needs to have the ability to mobilize resources across the organization to support the opportunities it surfaces, which can carry execution and reputational risks. But because of the substantial potential upsides, encouraging intrapreneurship should be central to an organization’s mission. Take the example of the Happy Meal, which has been pivotal to the growth of McDonald’s: the idea came from a maverick internal team. The Sony PlayStation became the first gaming console to ship over 100 million units—though it required internal champions to pick up the pieces from a failed external partnership. Southwest Airlines’ humorous safety announcements—pioneered by the airline’s founder as an integral part of the business model—have enhanced its customer experience and business. When intrapreneurship is encouraged, there’s evidence that people enjoy greater autonomy and a stronger connection to the organization’s purpose; not surprisingly, this leads to higher productivity and engagement. What does it take to develop more of this culture, and then to apply it? It’s not an exact science, but there are ways to give your intrapreneurs a leg up.


How Emotional Connections Can Drive Change: Applying Fearless Change Patterns

The Fear Less pattern suggests that you can appreciate their opposition. Ask for Help from the skeptic because they see the innovation in a different way than you do - therefore, they may be able to provide useful information you haven’t considered. You will learn from them and, in the process, they may begin to shift from the act of resisting to rethinking. You may not be able to convince them and trying to do this will likely take more time than you have. But you can seek the places where you agree and, perhaps, create some unique ideas that begin with those points of agreement. Most importantly, when you ask for their thoughts on the upcoming change, they will begin to become involved in the initiative, rather than simply complaining on the sidelines. They will recognize you care about what they can contribute and, as one of our Fearless Change readers pointed out, it doesn’t make it as much fun for them to complain. You may even want to seek out some skeptics to become a Champion Skeptic, taking on the official role of pointing out flaws and challenges at strategic points throughout the change initiative.


India Data Protection Bill Approved, Despite Privacy Concerns

The bill specifically states that the data fiduciary shall give the data principal the option to access such request for consent in English or any language specified in the Eighth Schedule to the Constitution of India. That final part has proved to be a tricky point though, as a PwC insight called this a "much-debated mandatory localization" as the central government may notify such countries or territories outside India to which a data fiduciary may transfer personal data. Cavey says the concerns about the bill are that this draft is more relaxed than the previous draft, and that fiduciaries will have more power over the data principals. "Less protection means that detection and investigation will be harder for the regulatory body," he says. The bill also states that the central government holds the authority to select the members of the Personal Data Protection Board, thus compromising its independence. Cavey says this is a main concern about how the Data Protection Board operates, how independent it will be, and how it will work in conjunction with the government.


Using creative recruitment strategies to tackle the cybersecurity skills shortage

Traditionally, there’s been an assumption that to begin a career in cybersecurity, you must have a specialized education and resume. However, the expanding threat landscape has forced the industry to reconsider what makes great talent. This includes emphasizing soft skills and varied backgrounds above all else, especially when it comes to combating the next big threat. Internships and apprenticeships can then offer the additional training needed to build a successful cybersecurity career. Education should also be continuous in the cybersecurity field, so organizations must ensure they are making an active effort to train the next generation of the workforce. This means supporting current employees and encouraging them to learn in the way that works best for them. External and internal internships and apprenticeships are key to achieving this. They not only create more awareness around what it actually takes to have a job in cybersecurity but also help those within and outside of organizations develop the necessary skills to meet the needs of the evolving threat landscape.



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe

Daily Tech Digest - August 08, 2023

The Value of a Virtual Chief Information Security Officer

The value of vCISO services extends beyond technical expertise. A vCISO plays a vital role in raising awareness of security incidents, improving threat detection, and fostering a culture of cybersecurity within your organization. Through employee training and education programs, they empower your staff to identify and mitigate potential risks, ultimately strengthening your overall information security program and your security posture. Additionally, a vCISO helps you navigate the complexities of incident detection, incident response, and breach management. In the unfortunate event of a security incident, they can provide immediate support, guiding you through the necessary steps to contain the breach, minimize damage, and restore operations swiftly. This proactive approach to incident management and managed detection can save your business valuable time, money, and reputation. Lastly, a vCISO keeps a vigilant eye on the evolving cybersecurity landscape, constantly monitoring emerging threats, vulnerabilities, threat intelligence, and regulatory changes.


Engineering as Art: Embracing Creativity beyond Science

Spending years gaining experience and refining skills may constrain our imagination, creativity, and focus. The idea behind cultivating a "Beginner's Mind" is that embracing such a mindset can lead to acquiring new abilities, making wiser choices, and fostering empathy. The essence of a Beginner's Mind lies in liberating ourselves from preconceived notions about the future, thus reducing the risk of stress or disappointment. Adopting a beginner's mindset proves beneficial for artists, allowing them to overcome creative blocks, initiate fresh ideas, and break free from self-imposed limitations. However, this mindset is not limited to artists; engineers and other less overtly creative professionals can benefit from it as well. Through years of dedicated practice and execution, our minds unconsciously develop recurring patterns, transforming them into mental shortcuts, rules, and best practices. I’ve had success getting into a beginner’s mindset over the years by avoiding pre-judgment when learning new technologies and by working with beginners in the domain.


AI as the next computing platform

In all its forms, AI is powerful because it spots and leverages patterns. This makes it a tool aiding one of humankind’s greatest cognitive skills. Pattern insight is the basis of the scientific method and the servicing of markets — our society’s twin cornerstones of innovation. For example, pattern-spotting AI is core to understanding how proteins fold, and it’s how a generative AI service built on an LLM decides what to write next. Whether it’s humans or machines searching for patterns, and increasingly it will be both, the quality of the outcome depends on the quality of the data, to the point that rich, diverse and, above all, accurate data may be the single greatest driver of success. Serving this need will be a big business in the growth of the AI platform. Like its predecessors, the capabilities of the AI platform will improve to the point where both employees and customers will expect accurate and timely information, more efficient use of resources, and personalization that changes depending on the context of the moment. Thus, it is a business not just of one pattern, but an intersection of several, at new levels of complexity and risk management.


Software services industry in transition

The companies share one problem as a common denominator: How do they transform their business model to address AI-enabled changes that seem to be moving at the speed of light? Especially in the last 6-9 months, ChatGPT has captivated global attention with its AI potential. ChatGPT, an AI Chatbot, acts like a human assistant answering questions based on human prompts. The tool is transforming the ideation and creative process in industries as diverse as advertising, marketing, and engineering. Another tool, GitHub Copilot, has revolutionised the field of AI-assisted code development by providing coding support in major software languages. Likewise, Databricks has released an AI tool that accepts English as input and outputs the needed code. These tools are available today for anyone to use. Customer service, which has long been supported by the Indian Business Process Outsourcing (BPO) industry, is already witnessing chatbots, touted as “the next big thing in technology”, being increasingly deployed in place of human agents.


Three Horizons of Your API Journey

APIs are designed and developed as part of the application and architecture planning process to integrate tightly with underlying systems, infrastructure and backend or data applications. This approach emphasizes the importance of well-defined, well-documented and reusable APIs, with the goal of deploying them as the foundation for scalable and interoperable systems. ... These governance practices ensure consistent API design, security, versioning and life-cycle management across the organization, enabling efficient collaboration and integration with external stakeholders. Ideally, much of this is automated, with baseline schemas set for API creation and policy types for different classes of APIs. Because the API stack is flexible and loosely coupled, this horizon stage is where the platform ops team should evaluate new technologies that could help their organization improve their API systems — new formats like GraphQL, generative AI tools for automated and up-to-date documentation, and runtimes like Deno that generate API-friendly code out of the box.
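The automated governance described above, where baseline schemas and policies are checked at API creation time, can be pictured as a simple lint pass over each new API definition. The rules and field names in this sketch are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical governance lint: validate a new API definition against
# an organisation's baseline rules before it is published.

BASELINE = {
    "require_auth": True,
    "require_version_prefix": True,
    "allowed_methods": {"GET", "POST", "PUT", "DELETE"},
}

def check_api(definition: dict) -> list:
    """Return a list of violations; an empty list means compliant."""
    violations = []
    if BASELINE["require_auth"] and not definition.get("auth"):
        violations.append("missing auth scheme")
    path = definition.get("path", "")
    if BASELINE["require_version_prefix"] and not path.startswith("/v"):
        violations.append("path lacks version prefix")
    for method in definition.get("methods", []):
        if method not in BASELINE["allowed_methods"]:
            violations.append(f"method {method} not allowed")
    return violations

candidate = {"path": "/orders", "auth": None, "methods": ["GET", "PATCH"]}
problems = check_api(candidate)
```

Running such checks in CI, or in the API gateway's publish pipeline, is one lightweight way to get the consistent design, security and versioning the governance practices call for without manual review of every API.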


Composable Enterprise – An Enterprise Architect View

By definition, composable enterprise focuses on modularity. Modularity means being able to compose and recompose the IT landscape. It is achieved by organizing data into small, discrete units that can be used to create new data sets quickly and effortlessly. Composable enterprise moves away from single, large, and complex applications toward decoupled business procedures. These modular business procedures are composed into workflows for particular purposes and integrated across the organization’s technology stack. ... Once you have understood the ecosystem, it’s time to assess the composability need and identify the scope. Specifically, focus on areas that need composability the most. Ask questions such as “Where do I need a faster time-to-market?” Use the inventories generated in the first step, including value streams, customer journeys, and business capabilities. This will help you assess and determine where to improve time-to-market and efficiency. As a result, you can prioritize your composability efforts in those areas to optimize speed-to-market.


6 interview questions for agile tech leads

A tech team lead’s responsibilities can vary significantly across organizations and teams, with some expecting tech leads to be hands-on coding with the team, while others expect them to function as a solutions architect. Simon Metson, VP of engineering at EDB, recommends using a straightforward test to evaluate coding skills. “We use a simple, and deliberately so, coding test prior to the interview,” he says. “The resulting app, which should take an hour or two to complete, gives us something to discuss in the interview and assess how the candidate codes, solves problems, and communicates.” Metson says the test isn’t just about technical chops, and is more about how the candidate plans for scalability. “The question I like to ask is how they’d scale out the application so that instead of running for one person, it’s used by millions. That’s a good test of how they approach complexity, what technologies they’re familiar with or interested in, and how they think about teams and crossing organizational boundaries.”


Agile Planning With Generative AI

Generative AI will eventually impact the entire DevOps life cycle from plan to operate. I started as a developer but have been a product manager for most of my career; for me, the ‘Holy Grail of DevOps’ would be one where product managers (PMs) and business analysts (BAs) were able to define a future state of a business process and press a button to deliver it without any developers, designers or testers involved. This dream is not practical in the near term and is not really desirable in the long term, either. PMs and BAs are good at understanding the needs of users and translating them into features but aren’t interaction designers. ... So my dream is to build a team where the BAs can define the changes and a small team of very talented architects and interaction designers can realize those changes in 10% of the time it takes today without requiring a large team to implement the details. This is similar to what has happened in manufacturing where robots and numerically controlled machines are able to do the heavy lifting with the help of operators.


Has Microsoft cut security corners once too often?

It seems all but certain that the cybersecurity corner-cutting that happened in the China attack was done by some mid-level manager. That manager was confident that opting for a slight cost reduction would not be a job risk. Had there been a legitimate fear of getting fired or even just having their career advancement halted, that manager would not have chosen to violate security policy. The sad truth, though, is that the manager confidently knew that Microsoft values margin and market share far more than cybersecurity. Think of any company you believe takes cybersecurity seriously, such as RSA or Boeing. Would a manager there ever dare to openly violate cybersecurity rules? If this is all true, why don’t enterprises take their business elsewhere? This brings us back to the “you can’t get fired for hiring Microsoft” adage. If your enterprise uses the Microsoft cloud — or, for that matter, cloud services at Google or Amazon — and there’s a cybersecurity disaster, chances are excellent that senior management will blame Microsoft.


Workplace monitoring needs worker consent, says select committee

While the government said in its AI whitepaper that it would empower existing regulators – including the HSE – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise, the Ada Lovelace Institute said in July 2023 that, because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts. Responding to the connected technologies report, Andrew Pakes, deputy general secretary of Prospect Union, said that although the monitoring of employees through various devices is becoming increasingly commonplace, regulation is lagging well behind implementation. “These are important recommendations from the Culture, Media and Sport committee report and would go some way to identifying the true scale of the issue, through government research, and catching up with the reality of worker surveillance. In particular, it is vital that workers are fully informed and involved in the design and use of monitoring software and what is being done with the data collected,” he said.



Quote for the day:

“When people go to work, they shouldn’t have to leave their hearts at home.” -- Betty Bender

Daily Tech Digest - August 06, 2023

California Opens Privacy Probe Into Car Data Collection

Modern vehicles are equipped with a wide range of sensors, cameras, and other technologies that generate vast amounts of data. This data includes information about the vehicle’s location, speed, acceleration, braking, and even driver behavior. Additionally, connected car systems can collect data on music preferences, navigation history, and other personal preferences. Car data is collected by various parties, including automakers, technology companies, and third-party service providers. This data is used for a variety of purposes, such as improving vehicle performance, developing new features, and providing personalized services to consumers. However, concerns have been raised about the potential misuse or unauthorized access to this sensitive information. The investigation by the California Privacy Agency highlights the importance of protecting consumer privacy in the context of car data collection. As vehicles become more connected and autonomous, the amount of data being generated increases exponentially. 


An eventful week in the world of Arm and RISC-V

What’s most intriguing about all of these coincidental events, though, is NXP Semiconductors’ announcement. Almost all of the initial investor companies announced in the new, unnamed organization are also Arm licensees. The press release states: “Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc., have come together to jointly invest in a company aimed at advancing the adoption of RISC-V globally by enabling next-generation hardware development.” So, was this strategically timed to coincide with Arm’s annual meet? What’s also intriguing is that the announcement says a new company has been formed but the company isn’t named. Maybe the disclaimer is the added statement that “the company formation will be subject to regulatory approvals in various jurisdictions.” The new unnamed company formed in Germany also “calls on industry associations, leaders, and governments, to join forces in support of this initiative which will help increase the resilience of the broader semiconductor ecosystem.”


How Agile Management Disrupts the Status Quo

Agile is a relatively new project management methodology, and you might wonder how it differs from the typical or traditional project or team management approach an organization might use—and how it disrupts those traditional approaches. Agile principles are designed to allow for more seamless collaboration, feedback, and flexibility to ensure faster and more thorough success in bringing high-quality products to market. Agile methodology and coaching should focus on bringing together stakeholders, developers, programmers, and end-users to support the underlying principles. This management methodology encourages and facilitates ongoing conversations and regular communication as a primary means of measuring progress with incremental development. However, “incremental” movement doesn’t necessarily translate to slowing down the process. In fact, team member input—and, importantly, user input—ultimately allows for a more effective, functional, and satisfying final product.


A Journey Through Software Development Paradigms

In the quest for seamless collaboration and integration between development and operations, we encounter DevOps, a paradigm that bridges the gap between siloed teams and fosters a culture of continuous integration, delivery, and learning. We explore the triumphs and challenges faced by organizations adopting DevOps, witnessing its potential to accelerate software delivery, improve quality, and enhance customer experiences. Beyond the familiar shores of Agile and DevOps, our journey ventures into the uncharted territories of emerging paradigms, each holding the promise of further transformation. Lean Software Development, Continuous Delivery, and Site Reliability Engineering (SRE) await our exploration, revealing new insights and practices that continue to shape the future of software development. As we reach the culmination of our voyage, we stand in awe of the pioneers and visionaries who have paved the way for progress, embracing adaptation and innovation in the pursuit of excellence. 


The Rise of Emotionally Aware Technology: A Deep Dive into Global Affective Computing

One of the key drivers behind the rise of affective computing is the increasing demand for personalized user experiences. Today’s consumers expect their devices to understand their needs and preferences and to respond accordingly. Emotionally aware technology can meet these expectations by adapting its responses based on the user’s emotional state. For example, a virtual assistant that can detect frustration in a user’s voice could offer to simplify its instructions or provide additional support. Another factor contributing to the growth of affective computing is the advancement in machine learning and AI technologies. These technologies enable computers to learn from data and improve their performance over time, making it possible for them to recognize and interpret complex human emotions. For instance, facial recognition software can now analyze subtle facial expressions to determine a person’s mood, while natural language processing can interpret the emotional tone in written text.
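The text side of affect detection described above can be sketched in miniature. The lexicon-based scorer below is a deliberately simplified, hypothetical illustration; real affective-computing systems use trained models rather than a hand-written word list like this one.

```python
import re

# Toy affect lexicon: word -> emotional weight. Purely illustrative;
# real systems learn these associations from labeled data.
AFFECT_LEXICON = {
    "frustrated": -2, "angry": -2, "broken": -1, "slow": -1,
    "thanks": 1, "helpful": 1, "great": 2, "love": 2,
}

def emotional_tone(text: str) -> str:
    """Classify the overall emotional tone of a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(AFFECT_LEXICON.get(w, 0) for w in words)
    return "negative" if score < 0 else "positive" if score > 0 else "neutral"

print(emotional_tone("This broken tool is so slow"))  # negative
```

A virtual assistant built on this idea would branch on the returned tone, e.g. offering simplified instructions when a user's messages score "negative".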


Digital twins: The key to smart product development

In advanced industries, survey data indicate that almost 75 percent of companies have already adopted digital-twin technologies that have achieved at least medium levels of complexity. There is significant variance between sectors, however. Players in the automotive—and aerospace and defense—industries appear to be more advanced in their use of digital twins today, while logistics, infrastructure, and energy players are more likely to be developing their first digital-twin concepts. One major aerospace company is developing a machine-learning-based geometry optimization system that can simulate thousands of different configurations at high speed to identify weight savings, aerodynamic improvements, and other performance benefits. A European software company is building a multiphysics model of the human heart to support drug and medical-device development. In the United States, an automotive company is building a system that can model all the software and hardware configurations it offers. The system will be used to simulate the effect of design improvements before they are delivered to customers as over-the-air updates. 
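The "simulate thousands of configurations" approach can be sketched as a parameter sweep over a surrogate model. Everything below is invented for illustration — the design parameters and the toy weight/stiffness formulas stand in for the physics-based or learned models a real digital twin would use.

```python
from itertools import product

def simulate(thickness_mm: float, rib_count: int) -> tuple[float, float]:
    """Toy surrogate model: returns (weight_kg, stiffness) for one configuration."""
    weight = 4.0 * thickness_mm + 0.8 * rib_count
    stiffness = 10.0 * thickness_mm + 3.5 * rib_count
    return weight, stiffness

def best_config(min_stiffness: float):
    """Sweep the design space and return the lightest feasible configuration."""
    candidates = product([1.0, 1.5, 2.0, 2.5, 3.0], range(2, 11))
    feasible = []
    for t, r in candidates:
        w, s = simulate(t, r)
        if s >= min_stiffness:          # keep only designs meeting the target
            feasible.append((w, t, r))
    return min(feasible)                # lightest design first in tuple order

print(best_config(40.0))
```

The same loop structure scales to thousands of configurations; production systems simply swap in a faster, higher-fidelity simulator and a smarter search than exhaustive sweep.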


Four technology disruptions organizations must watch

Digital humans are becoming more and more like real people. They are readily available and have the ability to interact over a screen to handle a service-based issue or provide customer service instantly. As digital human software is integrated with natural language processing and robotic process automation tools, digital humans will become more of a presence in the workflows of more and more processes. Consulting leaders should focus, both on their own and in tandem with leaders of other parts of an organization, on crafting approaches their clients can use to leverage a digital human workforce. Service delivery leaders — particularly within business process outsourcing providers — should be developing a strategy to deploy digital humans within their service delivery functions. ... A decentralized autonomous organization (DAO) is a digital entity, running on a blockchain (which provides a secure digital ledger for communication tracking), that can engage in business interactions with other DAOs, digital and human agents, as well as corporations, without conventional human management.


Bitcoin Beyond the Currency – the Disruption of Industries

The Bitcoin economy has the potential to become the biggest economy in the world; bigger than the United States or China. Bitcoin is a solution for everyone in the world who lives in fear of inflation risk, currency risk, or regime risk. A global, decentralized, trustless settlement layer and means of exchange with no state backing or intervention. For that to happen, BTC has to be more than a store of value, it has to be a currency. We have to stop thinking about it in terms of market capitalization and start thinking about it in terms of a gross decentralized product, the “GDP” of the Bitcoin economy. One doesn’t talk about the market capitalization of the dollar, we shouldn’t think of Bitcoin in those terms either. Bitcoin is continuing to become increasingly vital as legacy institutions fall behind the strides being made in the technology sector. These breakthroughs are significantly disrupting incumbent industries ranging from those commonly considered such as banking and finance, to more unique industries such as insurance and energy.


Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

Keep in mind: despite their capabilities, large language models can’t tell the difference between what’s real and what’s not. And when asked to verify if something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public. In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. This is the first case of this nature against OpenAI, which could test the legal viability of any future AI-related defamation lawsuits. However, some legal experts believe the case may be challenging to maintain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them. Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm.


Data Democratization’s Impact on Users and Governance

A key result of increased user involvement in the nuts and bolts of data is the increased importance of data literacy throughout the organization, Stodder added. “It’s essential for organizations to understand what their current capabilities are and to make a plan to address any stumbling block they’re having.” Training tailored to the full range of user personas, from advanced users to more basic data consumers, will be critical to any data democratization effort. ... Another critical aspect of a democratization effort is an effective governance program. “Organizations can easily expand their data programs faster than they expand their governance programs,” Stodder explained, “which, given the existing strain placed on governance by regulations and the complexity of the data landscape, can only compound the problems.” Some of these governance issues can also be exacerbated by the distributed nature of a democratized landscape. “Many organizations are trying to consolidate to a kind of hub-and-spoke model,” Stodder said, “which has been effective for many of them.”



Quote for the day:

“When something is important enough, you do it even if the odds are not in your favor.” -- Elon Musk

Daily Tech Digest - August 05, 2023

ESG & Climate Risk Management: Integrating Environmental Data In Ratings

Integrating environmental data into investment strategies allows investors to identify companies proactively addressing environmental challenges. It provides a comprehensive picture of a company's sustainability practices, particularly relevant in industries susceptible to climate risks, such as energy, agriculture, and transportation. By considering environmental data, investors can assess a company's resilience to climate change, regulatory changes, and physical risks such as extreme weather events. They can also evaluate how well a company aligns with global sustainability goals, such as the United Nations Sustainable Development Goals (SDGs). Moreover, environmental data integration helps investors identify companies capitalizing on emerging opportunities in the transition to a low-carbon economy. Companies with innovative solutions, efficient resource management, and clean technologies will likely thrive in a future characterized by climate-conscious policies and consumer preferences.


How New Tech Elevates Release Management’s Quality Standards

ML and AI technology can help with intelligently choosing when to deploy and when to roll back a release. To recommend deployment tactics, machine learning (ML) models can learn from previous deployment experiences, including success rates, user feedback and error patterns. To ensure a more dependable and error-resistant deployment process, AI algorithms can track the deployment, evaluate the system’s health and initiate automated rollbacks if anomalies or severe concerns are discovered. In general, ML and AI provide valuable capabilities to release management: boosting testing and quality assurance, streamlining release planning, enabling continuous monitoring, analyzing release risks and supporting sound deployment decisions. Utilizing these technologies enables businesses to improve software quality, streamline release management procedures, and deliver high-performing applications more effectively and reliably.
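A minimal sketch of the automated-rollback idea, assuming a simple statistical definition of "anomaly" — the metric (error rate), the window, and the z-score threshold below are illustrative choices, not taken from any particular tool:

```python
from statistics import mean, stdev

def should_roll_back(baseline_error_rates, post_deploy_error_rates,
                     z_threshold: float = 3.0) -> bool:
    """Flag a release when post-deploy error rates sit far outside the baseline."""
    mu = mean(baseline_error_rates)
    sigma = stdev(baseline_error_rates) or 1e-9  # guard against a zero-variance baseline
    z = (mean(post_deploy_error_rates) - mu) / sigma
    return z > z_threshold

baseline = [0.010, 0.012, 0.011, 0.009, 0.010]     # healthy pre-release error rates
print(should_roll_back(baseline, [0.050, 0.060]))  # spike after deploy -> True
print(should_roll_back(baseline, [0.011, 0.010]))  # within normal range -> False
```

In practice the "model" would be richer than a z-score — that is where the learned success-rate and error-pattern history comes in — but the control flow (observe, compare to baseline, roll back on anomaly) is the same.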


Unleash the Power of Open Source: The SONiC Revolution

As businesses embrace connectivity from core to cloud to edge, network management becomes more complex. Open source networking, specifically SONiC, simplifies network management by providing a unified fabric across the entire network ecosystem. SONiC enables organizations to manage and control their networks seamlessly, regardless of the deployment location. This simplification allows for a more consistent and streamlined network management experience, reducing operational complexities and enhancing efficiency. In addition, SONiC is built on standards-based open APIs, providing organizations numerous management platform options to choose from, which is particularly useful for those already invested in other DevOps, Linux-based, or open source solutions. ... Emerging technologies like AI/ML, 5G, and the data boom at the edge are transforming business operations and service delivery. These technologies require a modern infrastructure built on containerized architecture, Ansible automation, and predictive AI/ML monitoring solutions. SONiC easily enables the adoption of these technologies, providing the foundation for next-generation networking.


The Art of Reducing 'Technology Debt' & Winning Digital Transformation Race

Almost every organization is undergoing digital transformation, hence the noticeable surge in investments toward cloud automation, cloud computing, and cybersecurity services. In this scenario, however, the pace at which organizations attempt transformation matters, and the choices they make during this journey matter just as much. Not all transformations are mandatory. In the IT industry, there is a term called 'technology debt,' which basically means that you are at a specific juncture because of decisions you took years ago. Technical debt is similar to dark matter: you can deduce its impact but cannot see or measure it. So any company deliberating on large-scale digital transformation must ensure it partners with the right tech partners/vendors to get the right perspective on talent and capability. To strike a balance between reaping the benefits of transformation and ensuring security, companies must adopt a multi-faceted approach led effectively by experts. Companies should thoroughly assess potential partners' cybersecurity measures, track records, and commitment to data protection.


Spare Some Change: Emotional Debt, Technical Debt, & Preparing for Change

The tricky thing about emotional debt, in particular, is that the collectors are unpredictable — you can defer payments for months or even years without a single letter or call, then you run into a health scare, an unexpected bill, or just an especially bad day, and all of these feelings that you’ve been deferring come crashing down on you all at once. And it’s important to recognize this emotional debt translates into workplace well-being; a recent Deloitte study indicates that many employees are struggling with unacceptably low levels of well-being. ... I had one of these moments recently. I ran headlong into a mental and emotional wall that I didn’t see coming at all, and at first, it made every part of life overwhelming. These are the moments when understanding and applying a strategic planning framework like I provided in my book, Building the Business of You, provides much-needed structure. I’ve never tackled some of the challenges I’m dealing with in my personal life before, but I have been a manager, which means I’ve helped create strategies to help to manage change and facilitate progress, so I know I can do that.


How to account for hidden costs of digital product development and offset them

Once your digital product is developed and deployed, the journey is far from over. Ongoing software maintenance and troubleshooting are crucial to ensuring your product's stability, security, and optimal performance. However, these aspects can bring forth hidden costs that may catch you off guard if not managed strategically. Software maintenance involves routine updates, bug fixes, security patches, and performance enhancements. Failing to allocate adequate resources to these tasks can lead to a deterioration of your product's functionality and user experience. Additionally, the need for troubleshooting arises when unforeseen issues or bugs emerge post-launch, demanding swift resolutions to avoid disruptions and customer dissatisfaction. ... Outsourcing grants you the flexibility to scale resources based on the current maintenance requirements, saving you from the burden of hiring and training additional staff. You can easily add or reduce resources as needed, ensuring cost-effectiveness and optimized productivity.


India’s Data Privacy Bill is Back

The Bill itself proposes data protection legislation that allows transfer and storage of personal data in some countries while raising the penalty for violations. It suggests consent before collecting personal data and provides stiff penalties to the tune of Rs.500 crore on those that fail to prevent data breaches. The Bill applies to processing digital personal data within the Indian territory and processing it outside of India if such processing is in connection with any profiling or offering goods or services to data principals within India. However, it does not apply to non-automated processing or processing for domestic or personal purposes by individuals and data contained in records that have been in existence for at least 100 years. ... On the issue of consent, the Bill notes that personal data of an individual can only be processed for a lawful purpose for which the concerned individual has given consent or is deemed to have given her consent. It mentions the consent should be free, specific, informed, and unambiguous.


AI Ethics Teams Lack ‘Support Resources & Authority’

Diplomatically approaching one product team after another in hopes of collaborating only gets ethics workers so far. They need some formal authority to require that problems be addressed, Ali says. “An ethics worker who approaches product teams on an equal footing can simply be ignored,” she says. And if ethics teams are going to implement that authority in the horizontal, nonhierarchical world of the tech industry, there need to be formal bureaucratic structures requiring ethics reviews at the very beginning of the product development process, Ali says. “Bureaucracy can set rules and requirements so that ethics workers don’t have to convince people of the value of their work.” Product teams also need to be incentivized to work with ethics teams, Ali says. “Right now, they are very much incentivized by moving fast, which can be directly counter to slowly, carefully, and responsibly examining the effects of your technology,” Ali says. Some interviewees suggested rewarding teams by giving them “ethics champion” bonuses when a product is made less biased or when the plug is pulled on a product that has a serious problem.


A 4-pronged strategy to cut SaaS sprawl

IT leaders must conduct regular software audits to identify and assess the usage of all SaaS applications across the organization. Kamal Goel, senior VP of IT at Hitachi Systems India, says, “Gather data on software adoption, utilization, and user feedback to determine which tools are genuinely adding value to the organization and which ones are underutilized or redundant. This analysis will help you make informed decisions about which subscriptions to keep, downgrade, or terminate.” At a previous employer, Goel’s IT department conducted a comprehensive audit of all SaaS applications and found that multiple teams were subscribed to a project management tool but most only used a fraction of its features. “By switching to a more focused and cost-effective project management tool and discontinuing the old one, the organization saved significant expenses without compromising productivity,” he says. Oftentimes, teams continue to rely on existing systems and processes while the quality of data and intelligence in the new system suffers, resulting in its more advanced features becoming superfluous.
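A usage audit like the one Goel describes can start as simple aggregation. The sketch below is hypothetical — the record fields, seat counts, and 20% utilization cutoff are all assumptions for illustration — but it shows the shape of the analysis:

```python
from collections import defaultdict

def flag_underutilized(usage_records, licensed_seats, cutoff=0.2):
    """Flag tools whose share of active users falls below the cutoff.

    usage_records: iterable of dicts like {"tool": ..., "user": ...}
    licensed_seats: dict mapping tool name -> number of paid seats
    """
    active = defaultdict(set)
    for record in usage_records:
        active[record["tool"]].add(record["user"])  # count distinct users per tool
    flagged = []
    for tool, seats in licensed_seats.items():
        utilization = len(active[tool]) / seats
        if utilization < cutoff:
            flagged.append((tool, round(utilization, 2)))
    return sorted(flagged)

records = [{"tool": "projman", "user": "a"}]
print(flag_underutilized(records, {"projman": 10, "chat": 5}))
```

The output is a shortlist of candidates to downgrade or terminate; the feature-level underuse Goel mentions would need per-feature event data on top of this per-seat view.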


API Standardization and Its Role in Next-gen Networking

APIs are becoming more important because of the way applications are built and services are delivered. Applications are composable, relying on integrated functional elements from different sources. A simple application for most businesses might need a mobile frontend, a link to a backend database, and a processing engine in between. And many might use data from third parties. Standardized APIs would ensure the elements worked together and developers did not have to start from scratch every time they built a new application. ... “Increasingly next-generation services like Network-as-a-Service (NaaS) solutions will be delivered across a system of many providers, and the networks supporting these services will be fully API-driven,” says Pascal Menezes, CTO, MEF. “For this to happen, standards-based automation is required throughout the entire system where all parties adopt a common, standardized set of APIs at both the business process and operational levels.”



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - August 04, 2023

Cloud may be overpriced compared to on-premises systems

Public cloud computing prices have been creeping up because they are offered by for-profit companies that must generate a profit. Running a public cloud service is costly, and the billions invested over the past 12 years must show investors a return. That’s why prices have been increasing, not to mention the additional value cloud providers can offer, such as integrated AI, finops, operations, etc. At the same time, the cost of producing hardware, such as traditional HDD storage, has dropped, adding a new level of confusion: it’s now a viable alternative to cloud-based storage systems. Thus, picking cloud computing over traditional hardware is no longer a quick decision. ... Again, this is about being entirely objective when looking at all potential solutions, including cloud and on-premises. Cost being equal, cloud computing will be the better choice nine times out of 10, but now that the prices are very different, that may not be the case. If you’re the person making these calls, you must consider all aspects of these solutions, including future criteria.
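The comparison the author describes is ultimately arithmetic: amortized hardware plus operations versus pay-as-you-go. All figures below are placeholders for illustration, not quotes from any vendor:

```python
def on_prem_monthly_cost(hardware_cost, lifespan_months, ops_per_month):
    """Amortize the hardware purchase over its lifespan, plus ongoing ops."""
    return hardware_cost / lifespan_months + ops_per_month

def cloud_monthly_cost(tb_stored, price_per_tb_month):
    """Pay-as-you-go storage bill."""
    return tb_stored * price_per_tb_month

# Hypothetical numbers: a $60k storage array amortized over 5 years with
# $400/month of operations, versus 100 TB at $20/TB-month in the cloud.
on_prem = on_prem_monthly_cost(hardware_cost=60_000, lifespan_months=60,
                               ops_per_month=400)
cloud = cloud_monthly_cost(tb_stored=100, price_per_tb_month=20)
print(on_prem, cloud, cloud - on_prem)
```

A real evaluation would add power, space, staff, egress fees, and the "future criteria" the author mentions, but even this toy model shows why the answer is no longer automatic.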


Leveraging Cloud DevOps to drive digital optimization and maximizing cloud benefits

Organizations need to consider several factors to ensure a successful implementation that delivers desired business value. They must begin by identifying the business drivers and putting in place a change management program. They must train or upskill their employees and focus on the structural and process changes required to foster collaboration between different functions. Companies must define measurable goals and KPIs and establish governance after considering the prevailing technology landscape and the future roadmap. They must know and assess their existing infrastructure and portfolio, their requirements, the current challenges they face, the right cloud mix that’s appropriate to their needs, and so on. They must plan their resources and identify their security needs to ensure Cloud DevOps can meet these requirements. ... The success of Cloud DevOps can be measured along four primary dimensions – efficiency, agility, reliability, and quality.


Spatial Data Science: The Basics You Need to Know

“Spatial data science is data science on geospatial data -- location data, navigation data, GPS data, any data that is geocoded,” Kobielus explained. “Geospatial data science builds on and extends the capabilities of geographic information systems.” ... When asked about principal use cases for spatial data science, Kobielus suggested several possibilities. “A core and mainstream enterprise application for spatial data science has been address management. Customer information management needs to be integrated with permanent addressing which then is geocoded so that as your customers move around you always know what their actual address is.” Other possible uses include determining optimal locations for things such as retail outlets or manufacturing facilities, optimizing supply chain logistics, tracking inventory, personalizing user experiences on mobile devices, allowing businesses to provide targeted content, and indoor applications to help organizations optimally arrange things within warehouses or other indoor spaces.
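Since geocoded data ultimately reduces to latitude/longitude pairs, a building block for nearly every use case above — site selection, logistics, geofenced targeting — is great-circle distance between two points. A standard haversine implementation:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in kilometres between two geocoded points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Central London to central Paris, roughly 344 km.
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))
```

GIS platforms wrap this (and far more, such as projections and spatial joins) behind higher-level operations, but the distance primitive is where most spatial feature engineering starts.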


Forecasting Team Performance

A useful technique is systems mapping. To do it, you first identify qualitative factors and situations affecting your organization. By qualitative, I’m referring to what you can't see on a neat dashboard or chart — the things that only come up in casual 1:1 calls with people, or in water cooler conversations and healthy retrospectives. Next, think about second-order effects. These are the consequences of those qualitative factors, which have further implications and so on. Researchers have shown that second-order effects are a big blind spot of the human brain; we think in terms of linear cause-and-effect and so often miss significant domino-effect repercussions. One way of making sure you’re sensitive to these second-order effects is to talk to people that have been in the organization for a long time. They're often the people who are most eager to share the types of stories that will help you draw connections between seemingly unrelated situations.


When Do Agile Teams Make Time for Innovation?

The unintended side effects of shorter sprints and sprint commitments can be devastating for creativity and breakthroughs. Teams that feel pressured by time or fear of failure aren't going to feel safe to experiment. In the absence of psychological safety, innovation recedes. It's critical that agile teams push back against this pressure to deliver and never fail. A recent Harvard Business Review article on psychological safety said, "In essence, agile’s core technology isn’t technical or mechanical. It’s cultural." Or as Entrepreneur.com put it, “Your company needs an innovation culture, not an innovation team. You don't become an innovative company by hiring a few people to work on it while everybody else goes through the motions.” As agilists, we must fight to establish experimentation as part of our company culture. Companies that value innovation empower self-organizing teams to try new things, encourage (and fund) continuous learning and improvement, solicit and act on feedback and ideas, and emphasize collaboration and communication.


Microsoft attacked over ‘grossly irresponsible’ security practice

Yoran said the so-called shared responsibility model of cyber security espoused by public cloud providers, including Microsoft, was irretrievably broken if a provider fails to notify users of issues as they arise and apply fixes openly. He argued that Microsoft was quick to ask for its users’ trust and confidence, but in return they get “very little transparency and a culture of toxic obfuscation”. “How can a CISO, board of directors or executive team believe that Microsoft will do the right thing given the fact patterns and current behaviours? Microsoft’s track record puts us all at risk. And it’s even worse than we thought,” said Yoran. “Microsoft’s lack of transparency applies to breaches, irresponsible security practices and to vulnerabilities, all of which expose their customers to risks they are deliberately kept in the dark about,” he added. A Microsoft spokesperson said: “We appreciate the collaboration with the security community to responsibly disclose product issues.”


Managing Partnership Misfits

To get the right stakeholders on board and collaborating, project initiators must combine engagement and containment strategies. And to do this, they need more practical and nuanced guidance along with a new set of lenses through which to assess the suitability of potential partners and to identify, motivate, or control misfits. In other words, they need a tool to identify potential fault lines in future partnerships and to help iron out or contain misalignments. Based on in-depth studies of successful and unsuccessful partnerships, we propose a framework that tests partner fit across three dimensions: task-fit (what each party needs); goal-fit (what each party aims to achieve); and relationship-fit (how each party works). How potential partners measure up on these dimensions flags likely misalignments with a prospective partner and allows project initiators to design ways to overcome them. ... You are looking for a partner with the required capabilities or resources who values the expected gains, which are not just financial rewards, but could relate to learning, inspiration, or reputation. 


Comparing Different Vector Embeddings

In the simplest terms, vector embeddings are numerical representations of data. They are primarily used to represent unstructured data. Unstructured data are images, videos, audio, text, molecular images and other kinds of data that don’t have a formal structure. Vector embeddings are generated by running input data through a pretrained neural network and taking the output of the second-to-last layer. Neural networks have different architectures and are trained on different data sets, making each model’s vector embedding unique. That’s why working with unstructured data and vector embeddings is challenging. Later, we’ll see how models with the same base fine-tuned on different data sets can yield different vector embeddings. The differences in neural networks also mean that we must use distinct models to process diverse forms of unstructured data and generate their embeddings. For example, you can’t use a sentence transformer model to generate embeddings for an image. On the other hand, you wouldn’t want to use ResNet50, an image model, to generate embeddings for sentences.
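The "output of the second-to-last layer" idea can be shown with a toy network. The weights below are random stand-ins for a pretrained model, purely for illustration; the point is that the penultimate activations — not the final task head's output — become the embedding, which is then compared with cosine similarity:

```python
import math
import random

random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(5)] for _ in range(8)]  # input -> hidden
W2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(5)]  # hidden -> task head

def matvec(w, x):
    """Multiply a (rows x cols) weight matrix by an input vector."""
    return [sum(wi[j] * x[i] for i, wi in enumerate(w)) for j in range(len(w[0]))]

def embed(x):
    hidden = [math.tanh(v) for v in matvec(W1, x)]  # second-to-last layer output
    _ = matvec(W2, hidden)                          # final task layer, unused here
    return hidden

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

x = [random.gauss(0, 1) for _ in range(8)]
print(cosine(embed(x), embed([v + 0.001 for v in x])))  # near-identical inputs -> close to 1.0
```

Because each model's weights and training data differ, the same input run through two different models yields different embeddings — which is exactly why an image model and a sentence model can't be swapped for each other.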


Could C2PA Cryptography be the Key to Fighting AI-Driven Misinformation?

The C2PA specification is an open internet protocol that outlines how to add provenance statements, also known as assertions, to a piece of content. Provenance statements might appear as buttons viewers could click to see whether the piece of media was created partially or totally with AI. Simply put, provenance data is cryptographically bound to the piece of media, meaning any alteration to either one of them causes verification to fail, signaling that the media can no longer be authenticated. You can learn more about how this cryptography works by reading the C2PA technical specifications. The protocol was created by the Coalition for Content Provenance and Authenticity, also known as C2PA, a joint project that brings together the Content Authenticity Initiative and Project Origin; Adobe, Arm, Intel, Microsoft and Truepic all support it. The Content Authenticity Initiative is an organization founded by Adobe to encourage providing provenance and context information for digital media. 
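The binding works, at a high level, by putting a hash of the media bytes into a manifest alongside the assertions and then signing that manifest, so tampering with either the media or the provenance breaks verification. A simplified structural sketch; real C2PA manifests use COSE signatures with X.509 certificates, not the HMAC stand-in below:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate/key

def bind_provenance(media_bytes, assertions):
    # Bind provenance to the media: hash the bytes into a manifest,
    # then sign the manifest. (Illustrative only, not the C2PA format.)
    manifest = {
        "assertions": assertions,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify(media_bytes, manifest, signature):
    # Any change to the media bytes or the manifest invalidates the binding.
    if hashlib.sha256(media_bytes).hexdigest() != manifest["media_sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

media = b"fake image bytes"
manifest, sig = bind_provenance(media, [{"action": "created", "ai": False}])
print(verify(media, manifest, sig))          # intact: True
print(verify(media + b"x", manifest, sig))   # altered media: False
```

The key property is that the hash ties the manifest to one exact sequence of bytes, while the signature ties the manifest to one signer, so neither half can be swapped out independently.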


Multi-modal data protection with AI’s help

First, there is a malicious mind behind the scenes thinking and scheming about how to change a given message for exfiltration. That string for exfil is not intrinsically tied to a medium: it could go out over Wi-Fi, mobile, browser, print, FTP, SSH, AirDrop, steganography, screenshot, Bluetooth, PowerShell, buried in a file, over a messaging app, in a conferencing app, through SaaS, in a storage service, and so on. A mind must consciously choose a method and morph the message to a new medium, with an adversary and their toolkit in mind, to succeed and, in this case, to score points in the hackathon. Second, a mind is required to recognize the string in its multiple forms or modes. Classic data loss prevention (DLP) and data protection work with blades that are disconnected from one another: each data type is searched for with unique search criteria and an expected sampling data type and format. These can be simple, such as credit card numbers or social security numbers in HTTP, or complex, like looking for data types that resemble a contract in email attachments.
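A simple DLP blade of the kind described here pairs a pattern with a validity check. A minimal sketch in Python for the credit-card case; the regex and Luhn filter are illustrative, not any product’s actual detection logic:

```python
import re

def luhn_valid(digits):
    # Luhn checksum, commonly used to cut false positives when a string
    # merely *looks* like a card number.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_for_cards(text):
    # One disconnected "blade": pattern match, then checksum validation.
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(scan_for_cards("report attached, PAN 4111 1111 1111 1111"))
```

The limitation the article points at is visible even in this toy: the blade only sees one mode (plain text in one channel), so the same number screenshotted, encoded, or moved over another medium sails past it.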



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne