Daily Tech Digest - August 15, 2023

How to build employee trust as AI gains ground

Most experts agree, however, that newer AI tools are less about replacing people and more about eliminating mundane, manual, or number-crunching tasks that most employees already hate. In fact, the technology will mostly help free up workers to tackle more important tasks such as project management, data science research and, perhaps most importantly, creative thinking and problem solving. "There is no example today of an AI system that can perform data science totally independent of people," said Erick Brethenoux, a distinguished vice president analyst at research firm Gartner. A lot of the uncertainty and fear workers feel about generative AI tools is based on ignorance, experts say. AI, in its many forms, has been around for more than 50 years, but many people simply don’t recognize it’s been beside them all this time. “People have always been afraid of AI because the vision they have of it is science fiction; it’s a Hollywood vision of it,” Brethenoux said. “There’s a lot of hype around it."


Red Hat rivals form Open Enterprise Linux Association

At the heart of the new organization is a disagreement over the way Red Hat, long the dominant force in enterprise Linux, provides access to its source code. For years, the company supported the development of a Red Hat Enterprise Linux clone called CentOS, with the idea of providing a free alternative for testing and development purposes, given that paid support would be unnecessary for that purpose. However, increasingly, users began to implement CentOS instead of RHEL in production environments as well, with other companies, including CIQ, springing up to provide enterprise support. Accordingly, Red Hat stopped supporting CentOS in its previous form two years ago, in favor of an alternative called CentOS Stream. That, however, is an upstream distribution, meaning that it’s updated much more frequently, making it less suitable for production work. And earlier this summer, Red Hat made its source code less accessible, restricting access to paying Red Hat customers and obscuring some details of the way the code is put together to create the final distribution.


How FraudGPT presages the future of weaponized AI

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesn’t reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in their ability to train the next generation of attackers. With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to the New York Times: 1,700 hackers in seven different units and 5,100 technical support personnel. While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as education, healthcare and manufacturing.


Application Rationalization: Is Complexity Avoidable?

Removing the clutter from your application portfolio is its own reward. Simplifying your software means easier maintenance, greater agility, lower training requirements, reduced costs, and faster rationalization in the future. This is, indeed, all possible to achieve. With unlimited budget, and a willingness both to make tough choices about stripping back applications and to be strict with your colleagues, you could of course remove all complexity from your portfolio. The question remains, however: should you? Fully optimizing your application portfolio is costly, time-consuming, and will likely cause a lot of frustration for software users along the way. True application rationalization involves a balancing act between technical debt and optimization, meaning some complexity will likely need to be tolerated. If your team communicates via Slack, for example, it might seem easy to remove email and Zoom licenses. However, if your external stakeholders don't use Slack Connect, you could cripple your company's ability to function by doing so.


How to take action against AI bias

With AI adoption increasing rapidly, it’s critical that guardrails and new processes be put in place. Such guidelines establish a process for developers, data scientists, and anyone else involved in the AI production process to avoid potential harm to businesses and their customers. One practice enterprises can introduce before releasing any AI-enabled service is the red team versus blue team exercise used in the security field. For AI, enterprises can pair a red team and a blue team to expose bias and correct it before bringing a product to market. It’s important to then make this process an ongoing effort to continue to work against the inclusion of bias in data and algorithms. Organizations should be committed to testing the data before deploying any model, and to testing the model after it is deployed. Data scientists must acknowledge that the scope of AI biases is vast and there can be unintended consequences, despite their best intentions. Therefore, they must become greater experts in their domain and understand their own limitations to help them become more responsible in their data and algorithm curation.
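One way to picture the "test the model after it is deployed" step is a recurring check that compares a model's decisions across groups. The sketch below is illustrative only: the decision records and the four-fifths-style threshold are assumptions, not part of any specific enterprise process.

```python
# Illustrative post-deployment bias check: compare a model's approval
# rates across groups. The records and the ~0.8 threshold are invented
# for this sketch.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group approval rate."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 1)]
ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # flag for review if well below ~0.8
```

Running a check like this on both the training data (before deployment) and live decisions (after) is the kind of ongoing effort the paragraph describes.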


3 Ways Enterprise Architects Can Bridge the Socio-Technical Gap

Software architecture is often a series of trade-offs. However, for people not involved in the original decision, it is often no longer clear what the trade-off was or how that trade-off led to the decision. One approach to capturing these decisions is Architecture Decision Records (ADRs). Note that ADRs are not some kind of technical rule; they are essentially a document. But having such a document can be a useful communication device, as it creates a history that allows people to keep track of trade-offs made in the past. The code and architecture themselves can only communicate the current state, but not how that current state came to be. Note that recording decisions doesn’t make them permanent or immutable. ... Capturing the rationale behind architectural decisions through methods like Architecture Decision Records ensures a clear understanding of trade-offs made over time. Additionally, addressing architecture incrementally, akin to code-level refinements, offers a practical way to manage risk and avoid conflicting priorities.
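As an illustration, a lightweight ADR might look like the following. The project, numbering, and specifics are invented for this sketch; the Status/Context/Decision/Consequences layout follows the widely used Nygard-style template.

```markdown
# ADR 012: Use PostgreSQL for the orders service

## Status
Accepted (2023-07-03). Supersedes ADR 007.

## Context
Orders and payments must be updated atomically. The team already runs
MySQL elsewhere, but the orders schema relies on transactional DDL
during migrations.

## Decision
Use PostgreSQL as the primary datastore for the orders service.

## Consequences
We accept operating a second database engine (added operational cost)
in exchange for simpler, safer migrations. Revisit if the platform
team offers a managed MySQL tier with equivalent features.
```

The value is in the Context and Consequences sections: years later, a reader can see which trade-off was accepted and under what conditions it should be revisited.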


Broken Promises of the Low-Code Approach

The reality is that many low-code solutions present a fundamental misunderstanding of software development: They conflate the challenge of understanding a programming language’s syntax with the challenge of designing effective application logic. Programming languages are just tools; their syntax is merely a means of expressing solutions. The true heart of software development lies in problem-solving, in crafting algorithms, data structures and interfaces that efficiently fulfill the application’s needs. By aiming to simplify software development through a graphical user interface (GUI), low-code solutions replace syntax without necessarily simplifying the fundamental challenge of designing robust applications. This approach can introduce multiple drawbacks while failing to alleviate the true complexities of software creation, ultimately having a negative impact on your team’s ability to deliver real value. ... Low-code solutions frequently grapple with limited customization, often failing to meet specific, complex or unique business requirements. The risk of vendor lock-in is another significant downside, potentially leaving users high and dry if there are changes in pricing, feature offerings or if the vendor closes shop.


Micro transformation: Driving big business benefit through quick IT wins

While it’s still early days to determine the success of the micro transformation, the initial customer feedback has been encouraging, Aird says. “There’s something intrinsically rewarding when you hear directly from customers about how much they’re enjoying the new tool, how it’s adding value to their purchasing experience, and how it makes the process of creating their own neon signs easier and more fun and exciting.” This is critical because Custom Neon operates in a “highly saturated e-commerce niche,’’ he adds, and micro transformations such as upgrading the website tool “subtly, but surely redefine the customer experience, contributing to our continued growth and competitiveness.” This kind of micro transformation underscores the power of agile methodology, enabling IT to identify bottlenecks, implement targeted improvements, and quickly see the effects, Aird says. “Moreover, they allow us to enhance our KPIs, notably in customer satisfaction and operational efficiency.”


Cybersecurity hiring gap: Time to rethink who can contribute

Ford sees the "cybersecurity talent shortage" as misidentified; he refers to the situation as an "experience shortage." As we all know, the only way to gain experience is by doing. He opened doors to "overlooked" talent with the creation of the company's Cybersecurity Career Reboot Program. The program's key factor probably broke every HR sorting tool, as they sought out individuals who had been passed over because they "lack the experience required to land entry-level jobs." ... They then used their Professional Rotation Experience Program (PREP), which took recent grads and put them in a "two-year rotational program that includes global exposure to all our cybersecurity functions. PREP participants gain experience with the foundations of cybersecurity through hands-on project work, exposure to a variety of experiences, and innovative training and development, rotating through the different teams within cybersecurity every six months during the program." While the focus of homegrown talent programs is on new and eager employees, CISOs must also keep an eye on retaining and improving the talent already in place.


Generative AI – What Are the Legal Issues?

The pace of the development of AI far outstrips the legal, regulatory and ethical frameworks which need to be put in place to ensure that the benefits of AI are carefully considered. For anyone looking at adopting or developing AI technologies, risk assessments should be conducted to identify and mitigate the impact on individuals. ... Considering the dataset used to teach the algorithm will potentially identify areas of risk. For example, an AI designed to sift CVs and provide hiring recommendations might inherit any unconscious hiring biases from the underlying dataset of ‘successful applicant’ and ‘unsuccessful applicant’ CVs. Not all algorithms are born equal and consideration should be given to the sophistication and development of any product before use given the potential impact on individuals. ... As Gen AI can create new content, who will own the intellectual property in any new work, media, image or music? There may be IP issues if the Gen AI creator did not have sufficient rights to the information used in the training dataset and any contract should clearly set out IP ownership where possible.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - August 11, 2023

How to tell if your cloud finops program is working

A successful finops program should ensure compliance with applicable financial regulations and industry standards. These vary across industries, and a few, such as finance and healthcare, are more constrained by rules than others. A good finops program will help your company stay current with relevant laws, rules, and regulations, such as GAAP (generally accepted accounting principles) or IFRS (International Financial Reporting Standards). Regular audits and reviews should be conducted to ensure that financial processes and practices align with the required standards and laws. These are often overlooked by the cloud engineers and cloud architects building and deploying cloud-based systems, since most of them don’t have a clue about regulations and laws beyond the basics. If done well, finops should take that stress off those groups and automate much of what needs to be monitored regarding regulatory compliance. I was early money on finops, and for good reason: we need to understand the value of cloud computing right after deployment and monitor its value continuously.
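The "automate much of what needs to be monitored" idea can be pictured as a periodic audit job that sweeps resource metadata against a policy. Everything below is invented for the sketch: the resource records, the required tags, and the encryption rule are placeholder policy, not any real provider's API.

```python
# Illustrative finops compliance sweep: flag cloud resources that violate
# simple policy rules. The resources and rules are made up for this sketch.
REQUIRED_TAGS = {"cost-center", "data-classification"}

resources = [
    {"id": "vm-1", "tags": {"cost-center": "fin"}, "encrypted": True},
    {"id": "db-1", "tags": {"cost-center": "ops", "data-classification": "pii"},
     "encrypted": False},
]

def audit(resources):
    """Return (resource id, issue) pairs for every policy violation."""
    findings = []
    for r in resources:
        missing = REQUIRED_TAGS - set(r["tags"])
        if missing:
            findings.append((r["id"], f"missing tags: {sorted(missing)}"))
        if not r["encrypted"]:
            findings.append((r["id"], "encryption at rest disabled"))
    return findings

for rid, issue in audit(resources):
    print(f"{rid}: {issue}")
```

In practice, a job like this would pull live inventory from the cloud provider and feed findings into the regular audit-and-review cycle the paragraph calls for.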


Why Data Science Teams Should Be Using Pair Programming

Based on what we learn about the data from EDA, we next try to summarize a pattern we’ve observed, which is useful in delivering value for the story at hand. In other words, we build or “train” a model that concisely and sufficiently represents a useful and valuable pattern observed in the data. Arguably, this part of the development cycle demands the most “science” from data scientists as we continuously design, analyze and redesign a series of scientific experiments. We iterate on a cycle of training and validating model prototypes and make a selection as to which one to publish or deploy for consumption. Pairing is essential to facilitating lean and productive experimentation in model training and validation. With so many options of model forms and algorithms available, balancing simplicity and sufficiency is necessary to shorten development cycles, increase feedback loops and mitigate overall risk in the product team. As a data scientist, I sometimes need to resist the urge to use a sophisticated, stuffy algorithm when a simpler model fits the bill.
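The train/validate/select cycle described above can be sketched in a few lines. The two candidate "models" here, a constant (mean) predictor and a one-variable least-squares line, and the synthetic data are illustrative stand-ins for real model prototypes.

```python
# Sketch of the iterate-train-validate-select loop: fit two candidate
# models on a training split and keep the one with lower validation error.
# The data and candidate models are invented for this illustration.
import random

random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(40)]
train, valid = data[:30], data[30:]

def fit_mean(pts):
    """Simplest possible model: always predict the training mean."""
    m = sum(y for _, y in pts) / len(pts)
    return lambda x: m

def fit_line(pts):
    """One-variable least-squares fit y = a + b*x."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return lambda x, a=my - b * mx, b=b: a + b * x

def mse(model, pts):
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

candidates = {"mean": fit_mean(train), "line": fit_line(train)}
scores = {name: mse(m, valid) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(f"selected model: {best}")
```

The point the paragraph makes about simplicity versus sufficiency lives in this loop: if the mean predictor had scored nearly as well, the simpler model would be the defensible choice.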


Should IT Reinvent Technical Support for IoT?

A first step is to advocate for IoT technology purchasing standards and to gain the support of upper management. The goal should be for the company to not purchase any IoT technology that fails to meet the company’s security, reliability, and interoperability standards, which IT must define. None of this can happen, of course, unless upper management supports it, so educating upper management on the risks of non-compliant IoT, a job likely to fall to the CIO, is the first thing that should be done. Next, IT should create a “no exceptions” policy for IoT deployment that is rigorously followed by IT personnel. This policy will make it a corporate security requirement to set all IoT equipment to enterprise security standards before any IoT gets deployed. Finally, IT needs a way to stretch its support and service capabilities at the edge without hiring more support personnel, since budgets are tight. If something goes wrong at your manufacturing plant in Detroit while technical issues arise at your San Diego, Atlanta, and Singapore facilities, it will be a challenge to resolve all issues simultaneously with equal force.
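The "no exceptions" policy described above amounts to a pre-deployment gate: a device is checked against the enterprise baseline before it goes live. The baseline fields and device record below are assumptions invented for the sketch.

```python
# Illustrative pre-deployment gate for a "no exceptions" IoT policy.
# The baseline requirements and device record are made up for this sketch.
MIN_FIRMWARE = (2, 4)  # assumed minimum (major, minor) firmware version

def deployment_errors(device):
    """Return a list of policy violations; empty means approved."""
    errors = []
    if not device.get("default_password_changed"):
        errors.append("default credentials still in place")
    if not device.get("tls_enabled"):
        errors.append("TLS disabled")
    if tuple(device.get("firmware", (0, 0))) < MIN_FIRMWARE:
        errors.append("firmware below required version")
    return errors

camera = {"default_password_changed": True, "tls_enabled": False,
          "firmware": (2, 1)}
errs = deployment_errors(camera)
print("blocked:" if errs else "approved", errs)
```

A real gate would pull these attributes from the device or an asset inventory rather than a hand-written dict, but the decision logic, refuse deployment on any violation, is the policy the paragraph defines.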


Why AI Forces Data Management to Up Its Game

With so much storage growth, organizations never reach the point where storage is no longer a constant challenge. The combination of massive capacity growth and democratized AI make it imperative to implement effective data management from the edge to the cloud. A strong foundation for artificial intelligence necessitates well-organized data stores and workflows. Many current AI projects are faltering due to a lack of data availability and poor Data Management. Skilled Data Management, then, has become a key factor in truly realizing the potential of AI. But it also plays a vital role in containing storage costs, hardening data security and cyber resiliency, verifying legal compliance and enhancing customer experiences, decision-making, and even brand reputation. ... Using metadata and global namespaces, the Data Management layer makes data accessible, searchable, and retrievable on whatever storage platform or media it may reside. It adds automation to facilitate tiering of data to long-term storage as well as cleansing data and alerting on anomalous conditions.
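The tiering automation mentioned above can be pictured as a rule that routes each data set to a storage tier based on metadata such as last-access time. The thresholds, file names, and dates below are invented for the sketch.

```python
# Sketch of metadata-driven tiering: assign each catalog entry a storage
# tier by last-access age. Thresholds and records are illustrative only.
from datetime import datetime

TIERS = [(30, "hot"), (180, "warm")]  # (max age in days, tier); older -> archive

def tier_for(last_access, now):
    age_days = (now - last_access).days
    for max_days, tier in TIERS:
        if age_days <= max_days:
            return tier
    return "archive"

now = datetime(2023, 8, 15)
catalog = {
    "report.parquet":  datetime(2023, 8, 1),
    "logs-2023-01.gz": datetime(2023, 4, 10),
    "raw-2021.tar":    datetime(2021, 6, 5),
}
plan = {name: tier_for(ts, now) for name, ts in catalog.items()}
print(plan)
```

In a real data management layer, the catalog would come from the metadata index spanning edge, datacenter, and cloud storage, and the plan would drive automated migration jobs rather than a print statement.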


Hybrid work is entering the 'trough of disillusionment'

Even though remote and hybrid work practices are in the trough now, that doesn’t mean they’ll stay there. Some early adopters eventually overcome the initial hurdles and begin to see the benefits of innovation and best practices emerge. Until then, the return-to-office edicts continue to roll out. ... Even with an uptick in return-to-office mandates, office building occupancy continues to remain below pre-pandemic levels. The average weekly occupancy rate for 10 metropolitan areas in the United States this week was below 50% (48.6%), according to data tracked by workplace data company Kastle Systems. That occupancy rate is actually down 0.6% from last week. Office occupancy rates change substantially, depending on the day of the week. Tuesdays, Wednesdays and Thursdays are the most popular in-office days. Globally and in the US, organizations have moved from ad hoc hybrid work policies, where employees could pick their days in the office, to structured schedules.


Cisco: Hybrid work needs to get better

While organisations in APAC have been progressive in adopting hybrid work arrangements, Patel cautioned them against making the mistake of mandating that employees work in the office all the time. “It’s much better to create a magnet than a mandate,” he said. “Give people a reason to come back to the office because when they collaborate in the office, there’s going to be this X factor that they don’t get when they are 100% remote.” Patel said adopting hybrid work would also help organisations recruit the best talent from anywhere in the world, enabling more people to participate equally in a global economy. “The opportunity is very unevenly distributed right now, but human potential is pretty evenly distributed, so it would be nice if anyone in a village in Bangladesh can have the same economic opportunity as someone in Silicon Valley. “Most of the time, the mindset is that you are distance-bound, so if you don’t happen to be in the same geography, then you don’t have access to opportunity. That’s a very archaic way of thinking and we need to think about this in a much more progressive manner,” he said.


Rethinking data analytics as a digital-first driver at Dow

The first step in this journey involved bringing our D&A teams under one roof in the first half of 2022. This team eventually became Enterprise D&A, with team members based around the world. To develop the strategy, we held discussions with external partners and interviewed Dow leaders to identify trends important to business success. Then we looked at where those trends align with key focus areas like customer engagement, accelerating innovation, market growth, reliability, sustainability, and the employee experience. Our central task was to translate our findings into a strategy that creates the most value for our stakeholders: our customers, our employees, our shareholders, and our communities. We determined we needed to move to a hub-and-spoke model. To make this work and achieve our vision of transforming data into a competitive advantage, we would need to build a strong culture of collaboration around D&A and support it with talent development within our organization and across the company.


Why data isn’t the answer to everything

What happens when you disagree with the AI? What are you then going to go and do? If you’re always going to disagree with it and do what you wanted to do anyway, then why bother bringing the AI in? Have you maybe mis-written your requirements and what that AI system is going to go and do for you? A lot of this is the foundational strategy on organisational design, people design, decision making. As an executive leader, it’s really easy to stand up on stage and say, ‘Here’s our 2050 vision or our 2030 vision.’ At the end of the day, an executive doesn’t do much, they just create the environment for things to happen. It’s frontline staff that make decisions. There are two reasons why you wouldn’t make a decision: you don’t have the right data and context or you don’t have the authority to make that decision. Typically, you only escalate a decision when you don’t have the data and context. It’s your manager that has more data and context, which enables that authority. So, with more data and context, I can push more authority and autonomy down to the frontline to actually go and drive transformation. 


Whirlpool malware rips open old Barracuda wounds

The vulnerability, according to a CISA alert, was used to plant malware payloads of Seapsy and Whirlpool backdoors on the compromised devices. While Seapsy is a known, persistent, and passive Barracuda offender masquerading as a legitimate Barracuda service "BarracudaMailService" that allows the threat actors to execute arbitrary commands on the ESG appliance, Whirlpool backdooring is a new offensive in which attackers establish a Transport Layer Security (TLS) reverse shell to the Command-and-Control (C2) server. "CISA obtained four malware samples -- including Seapsy and Whirlpool backdoors," the CISA alert said. "The device was compromised by threat actors exploiting the Barracuda ESG vulnerability." ... Whirlpool was identified as a 32-bit executable and linkable format (ELF) binary that takes two arguments (C2 IP and port number) from a module to establish a Transport Layer Security (TLS) reverse shell. A TLS reverse shell is a method used in cyberattacks to establish a secure communication channel between a compromised system and an attacker-controlled server.


How digital content security stays resilient amid evolving threats

AI technology advancements and the great opportunities they provide have also motivated business leaders and consumers to reassess the underlying trust models that have made the internet work for the past 40 years. Every major advance in computing technology has stimulated sympathetic updates in the computer security industry, and this recent decisive move into a world powered by data, including auto-generated data, is no different. Provenance will become a key component in determining the trustworthiness of data. The changes, though, extend beyond technology. Rather than continuing to use systems that were built to assume trust and then verify, businesses and consumers will move to verify-then-trust systems, which will also bring mutual accountability into all processes where data is shared. Standards, open APIs and open-source software have proven adaptable to changing technology previously and will continue to prove adaptable in the age of AI and significantly higher volumes of digital content.
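A minimal "verify then trust" sketch: a publisher attaches a keyed signature to content, and a consumer refuses to use the content unless the signature checks out. Real provenance systems use public-key signatures and signed manifests; HMAC with an assumed pre-shared key is used here only to keep the sketch self-contained.

```python
# Minimal verify-then-trust sketch: sign content on publication, verify
# provenance before consumption. The shared key is an assumption made so
# the example runs standalone; real systems use public-key signatures.
import hashlib
import hmac

KEY = b"shared-provenance-key"  # illustrative pre-shared key

def sign(content: bytes) -> str:
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(content), signature)

article = b"generated summary v1"
tag = sign(article)
print("untampered accepted:", verify(article, tag))
print("tampered rejected:  ", not verify(article + b"!", tag))
```

The shift the paragraph describes is exactly this ordering: the `verify` call happens before the content is trusted, not after a problem is discovered.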



Quote for the day:

"He who wishes to be obeyed must know how to command" -- Niccolò Machiavelli

Daily Tech Digest - August 10, 2023

AMD's Zen architecture: The fundamentals of these Zen 4 CPUs

While the computing industry, CPU enthusiasts, and even AMD itself expected the road to performance leadership to be long, it was actually quite short. Zen 2, the successor to Zen, launched in 2019 and shocked pretty much everyone by blowing Intel out of the water. AMD racked up a massive lead in multi-threaded performance in pretty much every segment, had significantly better power efficiency in virtually every workload, and even surpassed Intel in single-threaded performance, which AMD hadn't been able to do for over a decade. From here, the road just got easier for AMD. The server market was (and still is) the most important area for AMD to make progress in, and by the time Zen 3 came out in 2020, AMD controlled 7% of the market, up from nearly 0% before Zen came out. This was made all the easier thanks to how Intel absolutely screwed up its plans to launch powerful 10nm CPUs, leaving AMD to face off against outdated and practically obsolete 14nm chips, which are some of the worst Intel has ever made.


Embracing the ‘Pedagogy of Error’ in Cybersecurity Education

The lesson I am always reminded of is that “we must abandon certainties in order to build from the challenge of uncertainty.” The deeper we delve into global instabilities and their challenges, the better perspectives and questions we can ask ourselves. It would be very sad to know that everything has been solved. Therefore, when we challenge current knowledge and explore different alternatives, we are opening up the possibility of seeing beyond what is known and, therefore, introducing something different. ... The academy must maintain and motivate the curiosity, expectations, challenges and adventures that arise when uncertainty manifests itself from the inevitability of failure. In this sense, it should motivate the pedagogy of “error”: understanding “error” as part of the process and not as a result is what makes it possible to create cybersecurity and IT professionals who are open to constantly learning, who let their prior knowledge be questioned, and who maintain a proactive stance in the face of adversaries’ challenges.


The dark side of the cloud: How cloud is becoming prey to sophisticated forms of cyber attack

As businesses increasingly adopt cloud-based solutions, cyber criminals—who are constantly looking for new vulnerabilities to exploit—are finding it easier to engineer data breaches, explains Rajesh Garg, EVP, Chief Digital Officer & Head of Applications & Cybersecurity at data centre service provider Yotta Data Services. Around 98 per cent of organisations globally now utilise some form of cloud-based tech, while many have adopted multi-cloud deployments from multiple cloud service providers. The massive adoption of the cloud environment has also given rise to Shadow IT, where employees or departments use hardware or software from external sources without the knowledge of the IT or security group of the organisation. This creates a vacuum, where the responsibility of managing security within organisations is not clearly defined. “Cloud infrastructure is inherently complex; that increases manifold with the addition of hybrid and multiple-cloud models,” says Atul Gupta.


Google Cloud launches Chronicle CyberShield to help government agencies tackle threats

A primary component of Chronicle CyberShield is establishing a modern government security operations center (SOC), comprising a network of interconnected SOCs to scale and aggregate security threats, Google Cloud said in a press release. Chronicle CyberShield enables governments to leverage cyber threat intelligence from Google and Mandiant, now part of Google Cloud, to build a scalable and centralized threat intelligence and analysis capability, according to the firm. This is integrated operationally into the government SOC to identify suspicious indicators and enrich the context for known vulnerabilities. The solution also allows governments to build a coordinated monitoring capability with Chronicle SIEM to simplify threat detection, investigation, and hunting with the intelligence, speed, and scale of Google. By implementing Chronicle across a network of SOCs, attack patterns and correlated threat activity across multiple entities are available for investigation and analysis. 
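The cross-SOC correlation idea, spotting the same attack activity across multiple entities, can be pictured with a simple aggregation over normalized events. The event records, entity names, and the two-entity threshold below are invented for this sketch and are not Chronicle's actual data model.

```python
# Sketch of cross-entity threat correlation: flag indicators of compromise
# (IOCs) observed at more than one entity. Records are illustrative only.
from collections import defaultdict

events = [
    {"entity": "ministry-a", "ioc": "198.51.100.7"},
    {"entity": "ministry-b", "ioc": "198.51.100.7"},
    {"entity": "ministry-b", "ioc": "203.0.113.9"},
]

def correlated_iocs(events, min_entities=2):
    """Map each IOC seen at >= min_entities distinct entities to those entities."""
    seen = defaultdict(set)
    for e in events:
        seen[e["ioc"]].add(e["entity"])
    return {ioc: sorted(ents) for ioc, ents in seen.items()
            if len(ents) >= min_entities}

print(correlated_iocs(events))
```

This is the benefit of aggregating SOCs that the paragraph describes: an indicator that looks like noise at one agency becomes a campaign signal when it recurs across several.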


International implications of hack-for-hire services

A lack of consequences for hackers who contract themselves out to foreign clients has only encouraged the hack-for-hire industry in India. US prosecutors indicted Sumit Gupta, the director of the Indian hacking firm BellTroX, in 2015 for hacking on behalf of two American lawyers, yet the Indian government never took action against him. After the 2015 case failed to result in a conviction, BellTroX went on to commit the Dark Basin hacks in 2020. BellTroX also surfaced as part of a criminal case against an Israeli private detective who hired Indian hacking firms on behalf of unnamed clients in Israel, Europe, and the US. The private detective pleaded guilty in 2022, but the hackers in India have yet to face any legal consequences. This lack of enforcement is not because India does not have the legal infrastructure to prosecute cybercrimes; the Information Technology Act of 2000 and its subsequent amendments in 2008 provide exactly that.


Windows Defender-Pretender Attack Dismantles Flagship Microsoft EDR

In studying the Windows Defender update process, Bar and Attias discovered that signature updates are typically contained in a single executable file called the Microsoft Protection Antimalware Front End (MPAM-FE[.]exe). The MPAM file in turn contained two executables and four additional Virtual Device Metadata (VDM) files with malware signatures in compressed — but not encrypted — form. The VDM files worked in tandem to push signature updates to Defender. The researchers discovered that two of the VDM files were large "Base" files that contained some 2.5 million malware signatures, while the other two were smaller but more complex "Delta" files. They determined the Base file was the main file that Defender checked for malware signatures during the update process, while the smaller Delta file defined the changes that needed to be made to the Base file. Initially, Bar and Attias attempted to see if they could hijack the Defender update process by replacing one of the executables in the MPAM file with a file of their own.
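The Base/Delta relationship described here is a standard base-plus-delta update pattern. The sketch below shows that general pattern only; the operation format and signature names are invented, and the real VDM layout is proprietary and differs.

```python
# Generic base-plus-delta update, the pattern the Base and Delta files
# follow conceptually: the delta lists edits to apply to the base set.
# Operation format and signature names are invented for this sketch.
def apply_delta(base, ops):
    """Return a new signature list with the delta operations applied."""
    sigs = list(base)  # copy so the base set is left untouched
    for op, *args in ops:
        if op == "add":
            sigs.append(args[0])
        elif op == "remove":
            sigs.remove(args[0])
        elif op == "replace":
            sigs[sigs.index(args[0])] = args[1]
    return sigs

base = ["sig:emotet.a", "sig:qakbot.b", "sig:agenttesla.c"]
delta = [("remove", "sig:qakbot.b"), ("add", "sig:qakbot.d"),
         ("replace", "sig:emotet.a", "sig:emotet.e")]
print(apply_delta(base, delta))
```

Because the researchers found the real files compressed but not encrypted or integrity-checked end to end, an attacker who can tamper with either the base or the delta stream effectively controls the resulting signature set, which is the crux of the attack.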


Securing The Future: Embracing Cloud-Centric Cybersecurity Strategies

Upskilling an entire cybersecurity organization is a significant undertaking that requires planning, time, funding and, most importantly, leadership buy-in. CISOs won't be able to snap their fingers and transform their teams into the cloud-literate leaders of tomorrow. After all, it could take up to six months of training just to have an intelligent-sounding conversation about the cloud, let alone be productive. Fortunately, much of the educational infrastructure necessary for upskilling workforces is available. Cloud service providers AWS, Microsoft Azure and Google Cloud each have a portfolio of cloud computing certifications. Platforms such as A Cloud Guru and Cloud Academy offer multi-cloud training. Security-focused cloud training and certifications are available from organizations such as the SANS Institute, (ISC)2 and the Cloud Security Alliance. ... These senior leaders are generally no longer "hands on keyboard" professionals. They lead programs, set priorities and assign goals. Of course, they need to be conversant with the technology their organization uses.


Northern Ireland Police at Risk After Serious Data Breach

"This is the most serious breach I have ever seen, due to the potential it could lead to the death or injury of those whose data has been disclosed," said Brian Honan, who heads Dublin-based cybersecurity firm BH Consulting. Exposed information could be abused not only by criminals, including for revenge, but also by republican paramilitaries who continue to target police officers and employees. The most recent attack occurred in February, when off-duty senior detective John Caldwell was shot in a sports complex in Omagh. He survived with "life-changing" injuries, said the chairman of Northern Ireland's Police Federation. Authorities arrested 11 people and charged three with being members of a proscribed terrorist group - in this case, the New IRA, a splinter of the Provisional Irish Republican Army that rejects a final 1997 terrorism cease-fire that helped lead to the 1998 Good Friday Agreement. The PSNI says it is working to "to identify any security issues" posed by the breach as quickly as possible, and it has notified the Information Commissioner's Office.


Ethics as a process of reflection and deliberation

You can integrate ethics into your projects by organising a process of ethical reflection and deliberation. You can organise a three-step process for that: (1) put the issues or risks on the table – things that you are concerned about, things that might go wrong; (2) organise conversations to look at those issues or risks from different angles – you can do this in your project team, but also with people from outside your organisation; (3) make decisions, preferably in an iterative manner – you take measures, try them out, evaluate outcomes, and adjust accordingly. A key benefit of such a process is that you can be accountable; you have looked at issues, discussed them with various people, and have taken measures. Practically, you can organise such a process in a relatively lightweight manner, e.g., a two-hour workshop with your project team. Or you can integrate ethical reflection and deliberation into your project, e.g., as a recurring agenda item in your monthly project meetings, and involve various outside experts on a regular basis.


6 legal ‘gotchas’ that could sink your CIO career

You might be thinking that your company will defend you against liability, and you might be right if your company has liability coverage for its officers and you are an officer. But does it? It’s standard for most Fortune 500 companies to have liability insurance for their executives, but many private and not-for-profit companies, facing rising premiums, may not carry such protection. If you’re interviewing for a CIO job, it’s prudent to find out whether the company you’re interviewing with offers liability protection and indemnification insurance for its executives. ... When CIOs are sued or fired, it’s often because of a significant cybersecurity breach. That’s because CIOs are ultimately responsible for safeguarding corporate information. When a breach occurs, it is always perceived as being on the CIO’s watch, and the repercussions can be severe.



Quote for the day:

"We learn by example and by direct experience because there are real limits to the adequacy of verbal instruction." -- Malcolm Gladwell

Daily Tech Digest - August 09, 2023

You can’t run away from technical debt

It could be poor architecture because IT leaders picked the less efficient path to a solution. Perhaps they went with a specific vendor, even a cloud provider, for the wrong reasons, such as a preexisting relationship. This led to a solution that functions but adds instead of removes technical debt. I’ve heard the excuses: A decision was made to expedite solution delivery for an urgent business purpose. However, that’s almost never the case. Most of the time technical debt accumulates from misguided decisions; the company could have gone in a direction that avoided technical debt but chose not to. Indeed, many of the better solutions would have cost less money and taken less time to deploy. In other words, most of the technical debt is a collection of self-inflicted wounds, usually caused by leaders who don’t bother to understand the bigger picture and take technological shots in the dark. Of course, “it works,” but it significantly increases technical debt. I’ve second-guessed a great many of these in my 40-year career.


Australia’s Banking Industry Mulls Better Cross-Collaboration to Defeat Scam Epidemic

The Australian banking sector, for its part, has already been looking for ways to work together to combat fraud. In May, 17 banks announced that, thanks to a collaboration between them, they had been able to halve the time it takes to identify and block payments to scam operators. This effort is powered by the ABA’s Fraud Reporting Exchange. This initiative cross-matches data between participating banks and allows for nearly real-time communication of fraudulent transactions across the network. Other government initiatives, meanwhile, include the new National Anti-Scams Centre, which went live on July 1. This organization will enable faster sharing of information, so police and regulators can act on scams more quickly. There will also be an Australian SMS sender ID registry that will provide a “whitelist” of phone numbers that can be used to block scam calls and SMS messages that impersonate government agencies.


6 ways CIOs sabotage their IT consultant’s success

Here’s a promise made during negotiations that’s often DOA once the project starts: The client will provide the consultant with the information necessary for the project to move forward. Of course, once the project starts, it turns out that nobody in the client organization can provide that information. Why would the client make a promise like this? One reason: Whoever in the client organization is responsible for providing the information isn’t willing to admit that they can’t, either to their boss or to the consultants. In the short term it’s safer to make the promise and kick the can down the road, until the project has been going on long enough to shift the blame to those damned consultants who keep on making unrealistic requests of IT staff who are already overworked and underpaid. (Take a deep breath.) There’s another reason some clients can’t deliver information on demand: They’ve outsourced the IT functional area responsible for the information needed, and the outsourcer isn’t willing to help out consultants they see as likely competitors.


Technical vs. Adaptive Leadership

While technical leadership is essential, it does come with limitations. Relying solely on technical prowess can lead to a narrow focus, overlooking broader organizational dynamics and human factors. Additionally, in an ever-changing environment, technical skills can become outdated, necessitating a constant commitment to learning and adapting. Adaptive leadership, on the other hand, revolves around the ability to navigate uncertainty, ambiguity, and change. It is a leadership approach that focuses on guiding teams and organizations through transformational periods. Adaptive leaders are skilled at fostering resilience, encouraging creative problem-solving, and inspiring a culture of continuous learning. Adaptive leaders excel in communication and emotional intelligence. They possess the capacity to connect with their teams on a deeper level, empathizing with their challenges and aspirations. This ability to understand and relate to individuals creates an environment of trust, openness, and collaboration. 


Why big tech shouldn’t dictate AI regulation

Formed initially by Anthropic, Google, Microsoft, and OpenAI, the Forum is presented as an industry body which will ensure the ‘safe and responsible development of frontier AI models’. While not defined by the Forum’s initial press release, ‘frontier AI models’ can be understood to be general-purpose AI models which, in the words of the Ada Lovelace Institute, ‘have newer or better capabilities’ than other models. The Forum’s objectives include undertaking AI safety research; disseminating best practices to developers; and collaborating with parties like academics, policymakers, and civil society bodies to influence the design and implementation of AI ‘guardrails’. Membership, meanwhile, will be restricted to organisations which (in the Forum’s eyes) both develop frontier models, and are committed to improving their safety. Admittedly, questions around the safe and effective development of AI will not arrive without investment, so it is encouraging to see a commitment to this collaborative approach amongst prominent AI vendors. Likewise, effective AI regulation will rely on input from those with real domain expertise: the industry’s doors must remain open to governments and policymakers.


Introduction to Apache Arrow

Apache Arrow is a framework for defining in-memory columnar data that every processing engine can use. It aims to be the language-agnostic standard for columnar memory representation to facilitate interoperability. It was developed by several open source leaders from companies also working on Impala, Spark and Calcite. Among the co-creators is Wes McKinney, creator of Pandas, a popular Python library used for data analysis. He wanted to make Pandas interoperable with data processing systems, a problem that Arrow solves. ... Another benefit of Apache Arrow is its integration with Apache Arrow Flight SQL. Having an efficient in-memory data representation is important for reducing memory requirements and CPU and GPU load. However, without the ability to transfer this data across networked services efficiently, Apache Arrow wouldn’t be that appealing. Luckily Apache Arrow Flight SQL solves this problem. Apache Arrow Flight SQL is a “new client-server protocol developed by the Apache Arrow community for interacting with SQL databases that makes use of the Arrow in-memory columnar format and the Flight RPC framework.”


How to develop an intrapreneurial culture

A company that wants to inspire intrapreneurship needs to have the ability to mobilize resources across the organization to support the opportunities it surfaces, which can carry execution and reputational risks. But because of the substantial potential upsides, encouraging intrapreneurship should be central to an organization’s mission. Take the example of the Happy Meal, which has been pivotal to the growth of McDonald’s: the idea came from a maverick internal team. The Sony PlayStation became the first gaming console to ship over 100 million units—though it required internal champions to pick up the pieces from a failed external partnership. Southwest Airlines’ humorous safety announcements—pioneered by the airline’s founder as an integral part of the business model—have enhanced its customer experience and business. When intrapreneurship is encouraged, there’s evidence that people enjoy greater autonomy and a stronger connection to the organization’s purpose; not surprisingly, this leads to higher productivity and engagement. What does it take to develop more of this culture, and then to apply it? It’s not an exact science, but there are ways to give your intrapreneurs a leg up.


How Emotional Connections Can Drive Change: Applying Fearless Change Patterns

The Fear Less pattern suggests that you can appreciate their opposition. Ask for Help from the skeptic because they see the innovation in a different way than you do - therefore, they may be able to provide useful information you haven’t considered. You will learn from them and, in the process, they may begin to shift from the act of resisting to rethinking. You may not be able to convince them and trying to do this will likely take more time than you have. But you can seek the places where you agree and, perhaps, create some unique ideas that begin with those points of agreement. Most importantly, when you ask for their thoughts on the upcoming change, they will begin to become involved in the initiative, rather than simply complaining on the sidelines. They will recognize you care about what they can contribute and, as one of our Fearless Change readers pointed out, it doesn’t make it as much fun for them to complain. You may even want to seek out some skeptics to become a Champion Skeptic, taking on the official role of pointing out flaws and challenges at strategic points throughout the change initiative.


India Data Protection Bill Approved, Despite Privacy Concerns

The bill specifically states that the data fiduciary shall give the data principal the option to access such request for consent in English or any language specified in the Eighth Schedule to the Constitution of India. That final part has proved tricky, though: a PwC insight called this a "much-debated mandatory localization", as the central government may notify such countries or territories outside India to which a data fiduciary may transfer personal data. Cavey says the concerns about the bill are that this draft is more relaxed than the previous draft, and that fiduciaries will have more power over the data principals. "Less protection means that detection and investigation will be harder for the regulatory body," he says. The bill also states that the central government holds the authority to select the members of the Personal Data Protection Board, thus compromising its independence. Cavey says this is a main concern about how the Data Protection Board operates, how independent it will be, and how it will work in conjunction with the government.


Using creative recruitment strategies to tackle the cybersecurity skills shortage

Traditionally, there’s been an assumption that to begin a career in cybersecurity, you must have a specialized education and resume. However, the expanding threat landscape has forced the industry to reconsider what makes great talent. This includes emphasizing soft skills and varied backgrounds above all else, especially when it comes to combating the next big threat. Internships and apprenticeships can then offer the additional training needed to build a successful cybersecurity career. Education should also be continuous in the cybersecurity field, so organizations must ensure they are making an active effort to train the next generation of the workforce. This consists of supporting their current employees and also encouraging their path to learn in the best way possible. External and internal internships and apprenticeships are key to achieving this. They not only create more awareness around what it actually takes to have a job in cybersecurity but also help those within and outside of organizations develop the necessary skills to meet the needs of the evolving threat landscape.



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe

Daily Tech Digest - August 08, 2023

The Value of a Virtual Chief Information Security Officer

The value of vCISO services extends beyond technical expertise. It plays a vital role in raising awareness of security incidents, strengthening threat detection, and fostering a culture of cybersecurity within your organization. Through employee training and education programs, they empower your staff to identify and mitigate potential risks, ultimately strengthening your overall information security program and your security posture. Additionally, a vCISO helps you navigate the complexities of incident detection, incident response, and breach management. In the unfortunate event of a security incident, they can provide immediate support, guiding you through the necessary steps to contain the breach, minimize damage, and restore operations swiftly. This proactive approach to incident management and managed detection can save your business valuable time, money, and reputation. Lastly, a vCISO keeps a vigilant eye on the evolving cybersecurity landscape, constantly monitoring emerging threats, vulnerabilities, threat intelligence, and regulatory changes.


Engineering as Art: Embracing Creativity beyond Science

Spending years gaining experience and refining skills may constrain our imagination, creativity, and focus. Cultivating a "Beginner's Mind" suggests that embracing this mindset can lead to acquiring new abilities, making wiser choices, and fostering empathy. The essence of a Beginner's Mind lies in liberating ourselves from preconceived notions about the future, thus reducing the risk of stress or disappointment. Adopting a beginner's mindset proves beneficial for artists, allowing them to overcome creative blocks, initiate fresh ideas, and break free from self-imposed limitations. However, this mindset is not limited to artists; engineers and less creative individuals can also benefit from it. Through years of dedicated practice and execution, our minds unconsciously develop recurring patterns, transforming them into mental shortcuts, rules, and best practices. I’ve had success getting into a beginner’s mindset over the years by avoiding pre-judgment when learning new technologies and working with a beginner in the domain.


AI as the next computing platform

In all its forms, AI is powerful because it spots and leverages patterns. This makes it a tool aiding one of humankind’s greatest cognitive skills. Pattern insight is the basis of the scientific method and the servicing of markets — our society’s twin cornerstones of innovation. For example, pattern-spotting AI is core to understanding how proteins fold, and it’s how a generative AI service trains on an LLM, deciding what to write next. Whether it’s humans or machines searching for patterns, and increasingly it will be both, the quality of the outcome depends on the quality of the data; rich, diverse and, above all, accurate data may be the single greatest driver of success. Serving this need will be a big business in the growth of the AI platform. Like its predecessors, the capabilities of the AI platform will improve, to a point where both employees and customers will expect accurate and timely information, more efficient use of resources, and personalization that changes depending on the context of the moment. Thus, it is a business not just of one pattern, but an intersection of several, at new levels of complexity and risk management.


Software services industry in transition

The companies share one problem as a common denominator: How do they transform their business model to address AI-enabled changes that seem to be moving at the speed of light? Especially in the last 6-9 months, ChatGPT has captivated global attention with its AI potential. ChatGPT, an AI Chatbot, acts like a human assistant answering questions based on human prompts. The tool is transforming the ideation and creative process in industries as diverse as advertising, marketing, and engineering. Another tool, GitHub Copilot, has revolutionised the field of AI-assisted code development by providing coding support in major software languages. Likewise, Databricks has released an AI tool that accepts English as input and outputs the needed code. These tools are available today for anyone to use. Customer service, which has long been supported by the Indian Business Process Outsourcing (BPO) industry, is already witnessing chatbots, touted as “the next big thing in technology”, being increasingly deployed in place of human agents.


Three Horizons of Your API Journey

APIs are designed and developed as part of the application and architecture planning process to integrate tightly with underlying systems, infrastructure and backend or data applications. This approach emphasizes the importance of well-defined, well-documented and reusable APIs with the goal of deploying them as the foundation for scalable and interoperable systems. ... These governance practices ensure consistent API design, security, versioning and life-cycle management across the organization, enabling efficient collaboration and integration with external stakeholders. Ideally, much of this is automated with baseline schemas set for API creation and policy types for different API classes. Because the API stack is flexible and loosely coupled, this horizon stage is where the platform ops team should evaluate new technologies that could help their organization improve their API systems — new formats like GraphQL, generative AI tools for automated and updated documentation, and runtimes like Deno that generate API-friendly code out of the box.
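To make the automated-governance idea concrete, here is a minimal, hypothetical sketch of the kind of baseline check a platform ops team might run in CI against each API definition before publication. The field names (`name`, `version`, `owner`, `auth_policy`) and the version convention are illustrative assumptions, not part of any standard:

```python
# Hypothetical governance gate: every API spec must carry a baseline
# set of fields before it can be registered in the API catalog.
REQUIRED_FIELDS = {"name", "version", "owner", "auth_policy"}

def check_api_spec(spec: dict) -> list[str]:
    """Return a list of governance violations for one API spec."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - spec.keys())]
    if spec.get("version") and not spec["version"].startswith("v"):
        problems.append("version must look like 'v1', 'v2', ...")
    return problems

spec = {"name": "orders", "version": "v2", "owner": "platform-team"}
print(check_api_spec(spec))  # ['missing field: auth_policy']
```

In practice this role is usually filled by schema linters run against OpenAPI or GraphQL definitions, but the shape is the same: a machine-checkable baseline applied uniformly across API classes.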


Composable Enterprise – An Enterprise Architect View

By definition, composable enterprise focuses on modularity. Modularity means being able to compose and recompose the IT landscape. It is achieved by organizing data into small, discrete units used to create new data sets faster and effortlessly. Composable enterprise moves away from single, large, and complex applications to decoupled business procedures. These modular business procedures are modified into workflows for particular purposes and integrated across the organization’s technology stack. ... Once you have understood the ecosystem, it’s time to assess the composability need and identify the scope. Specifically, focus on areas that need composability the most. Ask questions such as “Where do I need a faster time-to-market?” Use the inventories generated in the first step, including value streams, customer journeys, and business capabilities. This will help you assess and determine where to improve time-to-market and efficiency. As a result, you can prioritize your composability efforts in those areas to optimize speed-to-market.


6 interview questions for agile tech leads

A tech team lead’s responsibilities can vary significantly across organizations and teams, with some expecting tech leads to be hands-on coding with the team, while others expect them to function as a solutions architect. Simon Metson, VP of engineering at EDB, recommends using a straightforward test to evaluate coding skills. “We use a simple, and deliberately so, coding test prior to the interview,” he says. “The resulting app, which should take an hour or two to complete, gives us something to discuss in the interview and assess how the candidate codes, solves problems, and communicates.” Metson says the test isn’t just about technical chops, and is more about how the candidate plans for scalability. “The question I like to ask is, how they’d scale out the application so that instead of running for one person, it’s used by millions. That’s a good test of how they approach complexity, what technologies they’re familiar with or interested in, and how they think about teams and crossing organizational boundaries.”


Agile Planning With Generative AI

Generative AI will eventually impact the entire DevOps life cycle from plan to operate. I started as a developer but have been a product manager for most of my career; for me, the ‘Holy Grail of DevOps’ would be one where product managers (PMs) and business analysts (BAs) were able to define a future state of a business process and press a button to deliver it without any developers, designers or testers involved. This dream is not practical in the near term and is not really desirable in the long term, either. PMs and BAs are good at understanding the needs of users and translating them into features but aren’t interaction designers. ... So my dream is to build a team where the BAs can define the changes and a small team of very talented architects and interaction designers can realize those changes in 10% of the time it takes today without requiring a large team to implement the details. This is similar to what has happened in manufacturing where robots and numerically controlled machines are able to do the heavy lifting with the help of operators.


Has Microsoft cut security corners once too often?

It seems all but certain that the cybersecurity corner-cutting that happened in the China attack was done by some mid-level manager. That manager was confident that opting for a slight cost reduction would not be a job risk. Had there been a legitimate fear of getting fired or even just having their career advancement halted, that manager would have not chosen to violate security policy. The sad truth, though, is that the manager confidently knew that Microsoft values margin and market share far more than cybersecurity. Think of any company you believe takes cybersecurity seriously, such as RSA or Boeing. Would a manager there ever dare to openly violate cybersecurity rules? If this is all true, why don’t enterprises take their business elsewhere? This brings us back to the “you can’t get fired for hiring Microsoft” adage. If your enterprise uses the Microsoft cloud — or, for that matter, cloud services at Google or Amazon — and there’s a cybersecurity disaster, chances are excellent that senior management will blame Microsoft.


Workplace monitoring needs worker consent, says select committee

While the government said in its AI whitepaper that it would empower existing regulators – including the HSE – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise, the Ada Lovelace Institute said in July 2023 that, because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts. Responding to the connected technologies report, Andrew Pakes, deputy general secretary of Prospect Union, said that although the monitoring of employees through various devices is becoming increasingly commonplace, regulation is lagging well behind implementation. “These are important recommendations from the Culture, Media and Sport committee report and would go some way to identifying the true scale of the issue, through government research, and catching up with the reality of worker surveillance. In particular, it is vital that workers are fully informed and involved in the design and use of monitoring software and what is being done with the data collected,” he said.



Quote for the day:

“When people go to work, they shouldn’t have to leave their hearts at home.” -- Betty Bender