Daily Tech Digest - July 05, 2023

AI gold rush makes basic data security hygiene critical

APIs, in particular, are hot targets as they are widely used today and often carry vulnerabilities. Broken object level authorization (BOLA), for instance, is among the top API security threats identified by the Open Worldwide Application Security Project (OWASP). In BOLA incidents, attackers exploit weaknesses in how API requests are authorized, crafting otherwise legitimate requests to access data objects that should be off-limits to them. Such oversights underscore the need for organizations to understand the data that flows over each API, Ray said, adding that this area is a common challenge for businesses. Most do not even know where or how many APIs they have running across the organization, he noted. There is likely an API for every application that is brought into the business, and the number grows further amid mandates for organizations to share data, such as healthcare and financial information. Some governments are recognizing such risks and have introduced regulations to ensure APIs are deployed with the necessary security safeguards, he said. And where data security is concerned, organizations need to get the fundamentals right.
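The flaw described above can be sketched in a few lines. The store, handler, and user names below are hypothetical; the point is simply the object-level ownership check that BOLA-vulnerable APIs omit:

```python
# Illustrative sketch of a BOLA flaw and its fix, using an in-memory
# store of invoices keyed by ID (all names here are hypothetical).
INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 900},
}

def get_invoice_vulnerable(requesting_user, invoice_id):
    # BOLA: the handler trusts the client-supplied object ID and never
    # checks whether the authenticated user actually owns the object.
    return INVOICES[invoice_id]

def get_invoice_fixed(requesting_user, invoice_id):
    # Fix: enforce object-level authorization on every single request.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        raise PermissionError("not authorized for this object")
    return invoice
```

In the vulnerable version, an authenticated "alice" can fetch invoice 102 belonging to "bob" just by changing the ID in her request; the fixed version rejects it.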


Microsoft pushes for government regulation of AI. Should we trust it?

By focusing on legislation for the dramatic-sounding but faraway potential apocalyptic risks posed by AI, Altman wants Congress to pass important-sounding, but toothless, rules. They largely ignore the very real dangers the technology presents: the theft of intellectual property, the spread of misinformation in all directions, job destruction on a massive scale, ever-growing tech monopolies, loss of privacy and worse. If Congress goes along, Altman, Microsoft and others in Big Tech will reap billions, the public will remain largely unprotected, and elected leaders can brag about how they’re fighting the tech industry by reining in AI. At the same hearing where Altman was hailed, New York University professor emeritus Gary Marcus issued a cutting critique of AI, Altman, and Microsoft. He told Congress that it faces a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.” He charged that OpenAI is “beholden” to Microsoft, and said Congress shouldn’t follow Altman’s recommendations.


Ghostscript bug could allow rogue documents to run system commands

The problem came about because Ghostscript’s handling of filenames for output made it possible to send the output into what’s known in the jargon as a pipe rather than a regular file. Pipes, as you will know if you’ve ever done any programming or script writing, are system objects that pretend to be files, in that you can write to them as you would to disk, or read data in from them, using regular system functions such as read() and write() on Unix-type systems, or ReadFile() and WriteFile() on Windows… …but the data doesn’t actually end up on disk at all. Instead, the “write” end of a pipe simply shovels the output data into a temporary block of memory, and the “read” end of it sucks in any data that’s already sitting in the memory pipeline, as though it had come from a permanent file on disk. This is super-useful for sending data from one program to another. When you want to take the output from program ONE.EXE and use it as the input for TWO.EXE, you don’t need to save the output to a temporary file first with the > character and then read it back in with the < character for file redirection; a pipe connects the two programs directly.
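The file-like behavior described above can be demonstrated in a few lines with Python's `os.pipe()`: bytes written to one end come back out the other, entirely in memory, without ever touching the disk.

```python
import os

# Minimal sketch of an anonymous pipe: data "written" to one end is read
# back from the other end via ordinary file-descriptor calls, but it
# lives only in a kernel buffer, never in a file on disk.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the write end\n")
os.close(write_fd)  # closing the write end signals EOF to the reader

data = os.read(read_fd, 1024)
os.close(read_fd)

print(data.decode(), end="")
```

This is the same mechanism the shell uses when you connect two programs with `|` instead of routing their data through a temporary file.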


Island Enterprise Browser: Intelligent security built into the browsing session

It is essential to begin with the fact that Island policies are straightforward to configure. By the nature of the Application Boundary concept mentioned above, there is usually little need for the painfully granular efforts of traditional data protection approaches. Leveraging such facilities ensures that organizational data remains within the corporate application footprint, allowing data to move freely across that footprint when desired while preventing the spillage of corporate data into undesirable places. ... Island has very flexible logging and audit features. Because the browser is a natural termination point for SSL traffic, Island does not have to rely on the complex break-and-inspect mechanics that countless security tools require to gain visibility and control. The result is that Island has unimpeded, very natural visibility over application usage. Most importantly, this dexterity in audit logging delivers complete privacy for the user at the proper times, anonymized but audited logging at others, and even deep audit over any application engagement when required.


Get Ahead of the Curve: Crafting a Roadmap to a Successful Data Governance Strategy!

Crafting a seamless data governance plan is crucial for any organization that wants to move from data anarchy to order. A well-designed data governance plan can help ensure that data is accurate, consistent, and secure. It can also help organizations comply with regulatory requirements and avoid costly data breaches. To create a seamless data governance plan, it is important to start by identifying the key stakeholders and their roles in the data governance process. This includes identifying who will be responsible for data management, who will be responsible for data quality, and who will be responsible for data security. Once the key stakeholders have been identified, it is important to establish clear policies and procedures for data governance. This includes defining data standards, establishing data quality metrics, and creating data security protocols. It is also important to establish a system for monitoring and enforcing these policies and procedures. By following these steps, organizations can create a seamless data governance plan that will help them move from data anarchy to order.


History Never Repeats. But Sometimes It Rhymes.

Imagine Red Hat succeeds in eliminating all vendors it calls “rebuilders” from Enterprise Linux. Congratulations, Red Hat! You’re now king of the hill, and all users who want a “true” Enterprise Linux will be purchasing Red Hat subscriptions! What will this do for the Enterprise Linux ecosystem? According to Mike McGrath, Red Hat’s Vice President of Core Platforms, this will allow Red Hat to invest all that extra subscription money into creating new and innovative open source software and employing lots of new open source developers. Maybe. But having been in the industry for a long time, my suspicion is that IBM shareholders might have other uses for that money. More likely, in my opinion, is that users, who value freedom and control over their own computing destiny more than anything else, will swiftly migrate off the RHEL platform. Where will they go? That’s where my crystal ball isn’t so good. Maybe some will go to Debian and derivatives. Some will go to SuSE Enterprise Linux. The short-sighted ones will migrate workloads back to the welcoming arms of Microsoft Windows, or, being more charitable about Microsoft, an Enterprise Linux distribution running on top of Microsoft Azure.


How to Address AI Data Privacy Concerns

Companies developing AI systems can take several approaches to protecting data privacy. Data scientists need to be educated on data privacy, but company leadership needs to recognize they are not the ultimate experts on privacy. “Companies also can provide their data scientists with tools that have built-in guardrails that enforce compliance,” says Manasi Vartak, founder and CEO of Verta, a company that provides management and operations solutions for data science and machine learning teams. “Companies have to deploy a variety of technical strategies to protect data privacy; there is an entire spectrum of privacy preservation technologies out there to address such issues,” says Adnan Masood, PhD, chief AI architect at digital transformation solutions company UST. He points to approaches like tokenization, which replaces sensitive data elements with non-sensitive equivalents. Anonymization and the use of synthetic data are also among the potential privacy preservation strategies. “On the cutting edge, we have techniques like fully homomorphic encryption, which allows computations to be performed on encrypted data without ever needing to decrypt it,” says Masood.
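The tokenization approach Masood mentions can be sketched minimally: each sensitive value is swapped for a random, meaningless token, and the real value is kept in a separate vault. The vault here is just a dict for illustration; a production system would use a hardened, access-controlled token store.

```python
import secrets

# Toy tokenization sketch: sensitive values are replaced by random
# tokens; the token-to-value mapping lives only in a secured vault.
_vault = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a non-sensitive random token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Recover the original value; only vault-privileged code may call this."""
    return _vault[token]

card = "4111-1111-1111-1111"
token = tokenize(card)
# Downstream systems see only the token, never the card number,
# so a breach of those systems exposes nothing sensitive.
```

Unlike encryption, the token carries no mathematical relationship to the original value, so it cannot be reversed without access to the vault.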


India’s stock market regulator Sebi releases cybersecurity consultation paper

Cybersecurity experts hailed the consultation paper by Sebi as a step in the right direction. "By and large these entities are becoming very fertile targets of continuing cyberattacks and cybersecurity breaches," said Dr. Pavan Duggal, cyber law expert and practicing advocate at the Supreme Court of India, adding that there has been a need felt for quite some time for a consolidated cybersecurity and cyberresilience framework. "Sebi had come up with a cyberresilience framework some years ago, but the intersection of cybersecurity and cyberresilience had not been addressed. It is also an extension of what the existing principles of law are already stating," Duggal said. "Under the new updated IT rules 2023, every regulated entity has to adopt reasonable security practices and procedures to protect third-party data. In Sebi-regulated entities, these could become the parameters of due diligence on cybersecurity," Duggal said, adding that in the absence of a dedicated cybersecurity law and cyberresilience law, the framework assumes more relevance.


Taking the risk out of the semiconductor supply chain

Even before the most recent supply chain challenges, political leaders around the world had been taking a close look at the current semiconductor supply chain model. Semiconductors shape supply chains across the global economy, feeding countless commercial electronics as well as components essential to critical infrastructure, such as telecommunications and financial services. Perhaps more importantly, the supply of semiconductors has worldwide security implications, affecting national and regional defense and emergency response capabilities. Given this geopolitical impact, many policymakers concluded that the existing semiconductor supply chain model is too risky and are responding accordingly. Some of that risk is being addressed at national and regional levels, through measures such as the U.S. CHIPS Act and the EU Chips Act. However, investments in these initiatives are heavily focused on building new wafer fabrication facilities, or “fabs.” While fabs make up a critical part of the manufacturing process, increased fab production alone cannot fully secure the global supply chain.


Are cloud architects biased?

Don’t get me wrong; this does not mean a specific technology stack is incorrect. The issue is that we’ve stopped working from the requirements to the solutions; now things are the other way around. The reasons that many people are “compromised” are easy to define. Everything works. You put up a technology stack that solves the problem; however, if it’s not the fully optimized solution, it will cost the business millions of dollars over its life cycle, and at some point it will stop working and will have to be fixed. There is no immediate punishment for picking underoptimized solutions. Therefore, success is declared, and the project leader moves on to other decisions with their bias reinforced by the false perception of success. This dysfunctional process makes things worse and creates enormous technical debt. I’m not suggesting that cloud architects are getting money under the table to pick one technology stack over another. I am concerned they have not opened their minds to other options, even significant changes such as leveraging traditional on-premises solutions over cloud-based ones or vice versa.



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi

Daily Tech Digest - July 04, 2023

Banking Tech Forecast: Cloudy, With a Chance of Cyber Risk

The breakneck pace of adoption has also resulted in a shortage of security experts who understand the overlapping yet unique needs of the two industries. The cybersecurity sector has faced a shortage of skilled security professionals for most of its existence. Cloud solutions help mitigate this, because security can be integrated into the infrastructure and managed in a centralized place, Betz said. "Even then, financial institutions are still expected to conduct due diligence and oversight of third parties. This ability to evaluate security in a complex environment requires a high level of skill, which will continue to be highly sought-after," she said. Hiring and retraining existing staff to meet the volume of needed workers is a challenge too, as is a regulatory landscape that pushes for multi-cloud infrastructure for resiliency, Leach said. This means that a financial institution may be required to support multiple cloud service providers that operate differently and have different approaches to security assets.


Data Mesh: A Pit Stop on the Road to a Data-Centric Culture

The process behind these benefits is also noteworthy. Most importantly, the notion of data decentralization is deceptively simple, and potentially revolutionary. Think of how IT consumerization has upended traditional technology implementation: Where IT specialists once made all the decisions on which tools to buy for business professionals and dictated how all that hardware and software was to be used, those end users now call the shots. They freely buy the devices they want and download the apps they like, then wait for IT to catch up. This provides enormous benefits. With data mesh we’re seeing similar movement toward data democratization. When line-of-business teams and other constituencies within the enterprise gain unprecedented access to, and even ownership of, business data that was previously guarded, it accelerates collaboration and enables custom strategies to solve specific business problems. Data access also becomes simpler when interfaces and navigation are not just user-friendly but attuned to the priorities of specific functions, rather than having a more generic or enterprise-wide approach.


5 key mistakes IT leaders make at board meetings

It’s important to avoid speaking technical jargon, but sometimes you’re asked to define a technical term or explain a technology. One approach both Puglisi and I recommend is to answer technical questions with analogies from your industry. We both worked in the construction industry, so, for example, we might help these executives understand Scrum in software development by comparing it to design-build and agile construction project methodologies. ... Sometimes you need a spark to create a sense of urgency, but don’t take this approach too far. I once heard a CISO say, “If you can’t convince the board, then scare them,” which might get a CISO a yes to an investment, but lose credibility over time. CISOs who are natural presenters and storytellers can connect with the board using these skills, but only if given sufficient time to use this approach. If presenting isn’t your best skill, or you only have a few minutes to present, storytelling may confuse directors, says Tony Pietrocola, president and co-founder of AgileBlue. 


Beyond Browsers: The Long-Term Future of JavaScript Standards

Developers don’t want to write their code multiple times to run on different server-side runtimes, which they have to do today. It slows down development, increases the maintenance load and may put developers off supporting more platforms like Workers, Snell noted. Library and framework creators in particular are unhappy with that extra work. “They don’t want to go through all that trouble and deal with these different development life cycles on these different runtimes with different schedules and having to maintain these different modules.” That’s a disadvantage for the runtimes and platforms as well as for individual developers wanting to use tools like database drivers that are specifically written for one runtime or another, he pointed out. “We talk to developers who are creating Postgres drivers or MongoDB drivers and they don’t want to rewrite their code to fit our specific API. It would be great if it was an API that worked on all the different platforms.” WinterCG is trying to coordinate what Ehrenberg calls “a version of fetch that makes sense on servers” as well as supporting Web APIs like text encoder and the set timeout APIs in a compatible way across runtimes.


EU judgment sinks Meta’s argument for targeted ads

Article 6(1)(b) of the GDPR establishes that practice could only be justified on condition that if the data is not processed, the contract between the user and the service operator can’t be fulfilled. This contractual necessity for data processing is usually understood rather more narrowly. For example, it enables an online retailer to provide a customer’s address to a courier, which is clearly necessary data processing under the terms of the contract between the store and the customer. Meta had relied on Article 6(1)(b) as its main justification for data processing for targeted advertising by claiming that targeted advertising was part of the service it contractually owes its users – a clause it introduced as part of a change to its terms of service (ToS) made at the stroke of midnight on 25 May 2018, which is the precise second the GDPR first came into force. Max Schrems, founder of Austria-based data protection campaign group NOYB, argued that Meta seemed to have taken the view that it could just “add random elements” to the contract, i.e. its ToS, covering personalised advertising, to avoid offering users a yes or no consent option.


Cloud Equity Group’s Sean Frank Talks AI Mingling with the Cloud

What AI is going to allow cloud to do is really kind of predict those changes that are needed. So instead of reacting to slowdowns because you’re running out of memory, you’re running out of processing power -- it could predict when those things are going to happen based on analyzing historical data and then it can allocate the resources dynamically. For a small organization or a small environment, it may not sound like a big task but as you’re thinking about these larger enterprises that might be running hundreds or thousands of servers that are all running different programs and they interact, if one thing goes down it can affect the rest of the ecosystem. It really is a very important aspect in being able to help make sure that everything is up and can be maintained. From a maintenance standpoint as well, it really lends a very significant benefit. Right now, maintenance in general, in IT and cloud-based infrastructure, is largely reactive. There’s a problem, the user reports the problem, and someone from the IT department will then investigate the problem.
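The predictive idea Frank describes can be illustrated with a toy example: fit a linear trend to historical memory-usage samples and estimate when a capacity limit would be hit, so resources can be added before the slowdown occurs. The samples, threshold, and units below are invented for illustration; a real system would use far richer models and telemetry.

```python
# Toy sketch of predictive capacity planning: ordinary least-squares
# trend over hourly memory samples, extrapolated to a capacity limit.

def fit_trend(samples):
    """Least-squares fit of y = slope*t + intercept over (t, y) pairs."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den
    return slope, mean_y - slope * mean_t

# Hypothetical memory usage (GB), sampled hourly, climbing toward a limit.
history = [(0, 8.0), (1, 8.5), (2, 9.1), (3, 9.4), (4, 10.0)]
slope, intercept = fit_trend(history)

LIMIT_GB = 16.0
hours_until_limit = (LIMIT_GB - intercept) / slope
print(f"capacity reached in ~{hours_until_limit:.1f} hours")
```

Instead of waiting for an out-of-memory incident to be reported, an orchestrator could act on this forecast and provision additional capacity ahead of time.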


Is Zero Trust Achievable?

Microsegmentation is often the biggest hurdle businesses will face when implementing Zero Trust, and many decide to forego it. Because the architecture is now based on security need, it can be complex to implement, particularly over on-premise private networks, which tend to be flat and have high levels of implicit trust. Microsegmentation projects can run on for months and failure rates are high. Forrester’s Best Practices for Zero Trust Microsegmentation found that out of the 14 vendors who attempted to microsegment their private networks, 11 failed, and concluded that in order to succeed, senior management buy-in is needed to oversee the removal of implicit trust between identities. However, there are other issues that can see projects flounder. Many organisations have legacy systems in situ, for instance, that were designed to function on a perimeterised network and so assume trust. Realistically, the business may have to build the ZTNA around these until they are retired or replaced by cloud-based SaaS alternatives, suggests the National Cyber Security Centre in its Zero Trust Architecture Principles.


Data Governance: The Oft-Overlooked Pillar of Strong Data Quality

Ungoverned data typically originates because someone in the organization, such as a data analyst, produces data without establishing strong governance policies to control how the data will be shared, secured, and maintained over time. Data producers may also lack an understanding of the underlying data they are working with and the business rules for aggregating or disseminating various KPIs related to the data. The decision to share data without first sorting out these issues is not usually the result of malfeasance. On the contrary, ungoverned data often emerges because someone in the organization creates data that other people need, and in the rush to ensure that they can start using the data, sharing begins before anyone puts a plan in place for governing the data over the long term. It doesn’t help, either, that it’s common for businesses to have IT solutions in place that make it easy to share data, but not to govern it. The typical IT organization implements software that can store and distribute information across an organization, but the IT department has little or no knowledge of how different business units will use the data.


Why cyberpsychology is such an important part of effective cybersecurity

Cyberpsychologists and enterprise cybersecurity practitioners both stress the need to better understand how people interact with technology to create a stronger cybersecurity posture. They point to statistics showing that most breaches involve some sort of human misstep. Verizon's 2023 Data Breach Investigations Report, for example, found that "74% of all breaches include the human element, with people being involved either via error, privilege misuse, use of stolen credentials or social engineering." As Huffman says, hackers "don't want to go toe-to-toe with your firewall. They don't want to challenge your antivirus, because that's very difficult, not when they can exploit the largest vulnerability on every network on the planet right now -- that's us, people. Cybercriminals are not just hacking computers; they are hacking humans. Because ... unlike computers, we actually respond to propaganda." Psychology gets at why humans do what they do, says Huffman, founder of cybersecurity services firm Handshake Leadership. 


CISA's New 'CyberSentry' Program to Tighten ICS Security

The CyberSentry program is part of the several cybersecurity provisions in the National Defense Authorization Act signed by U.S. President Joe Biden in December 2021. The act provisioned $768 billion in defense spending, including cybersecurity components for various national and federal agencies. The program aims to support CISA's efforts to defend U.S. critical infrastructure network operators that support national critical functions such as power and water supply, banks and financial institutions, and healthcare by monitoring both known and unknown malicious activity affecting IT and OT networks. The CyberSentry program is based on a mutual agreement between CISA and participating critical infrastructure partners. The program is voluntary and is provided at no additional fees or equipment costs to the participating partners. Under the program, CISA harnesses sensitive government information and provides visibility and mitigation of cyber threats targeting critical infrastructure.



Quote for the day:

"You can only lead others where you yourself are willing to go." -- Lachlan McLean

Daily Tech Digest - July 03, 2023

5 Enterprise Architecture Principles For Digital Transformation

This involves creating the Business Architecture, which includes the Business Domains (BD) and Business Capabilities (BC). To begin, it's important to create a Business Capability Map (BC Map). This map helps assess the organization's current business capabilities and identifies the new BCs needed for the target BC map. The BC Map serves as a foundation for defining strategic areas of focus, such as improving skills, refining products, optimizing services, innovating business models, streamlining processes, setting key performance indicators (KPIs), and estimating implementation costs for Business Capabilities. ... In the realm of digital architecture, where data and application security reign supreme, the inclusion of comprehensive security measures from the very outset is imperative. Such measures need to span the entire architecture, encompassing a multitude of facets including authentication, multi-factor authentication, key management, single sign-on (SSO), authorization, auditing, logging, as well as the encryption of both data in transit and data at rest.


How CISOs can make themselves ready to serve on the board

There’s growing global momentum for public company and corporate boards to also start recruiting directors with relevant cybersecurity expertise, which is part of a broader trend of boards seeking recruits with any kind of technical expertise. A recent study from leadership consulting firm Spencer Stuart reported that about a third of ‘next-gen’ directors aged 50 and younger have technology backgrounds. Chenxi Wang, a longtime cybersecurity expert and venture capitalist is part of that wave. She was recruited to the board for MDU Resources Group, a US-based energy and construction materials firm, back in 2019. She says the company was attracted both by her cybersecurity acumen and her connections with the high-tech industry in general. “As a pretty traditional and longstanding company, they are actually very conscientious about bringing diverse mindsets and backgrounds to the boardroom,” she tells CSO. “And they wanted someone who’s connected with the West Coast high-tech industry to give them that perspective that a Midwest energy company may or may not have exposure to.”


Mobile Cyberattacks Soar, Especially Against Android Users

In terms of the types of mobile malware that's circulating out there, Kaspersky saw fewer mobile malware installers and less ransomware in the past year, but more banking Trojans, it stated in "The Mobile Malware Threat Landscape in 2022" report. "Cybercriminals are still working on improving both malware functionality and spread vectors," according to the report. "Malware is increasingly spreading through legitimate channels, such as official marketplaces and ads in popular apps. This is true for both scam apps and dangerous mobile banking malware." To put all of this into perspective, it should be noted that traditional computing platforms still attract the lion's share of the cybercrime pie. Kaspersky, for example, blocked more than 20 million malicious installers, spyware, and adware attacks on mobile devices over the last four quarters, but blocked more than 20 times that number against more common work platforms, such as Windows. However, the mobile threat vector is not as well protected. "In most cases, mobile devices represent a significant, unaddressed attack surface for enterprises," Zimperium's Keating says.


Cloud security: Sometimes the risks may outweigh the rewards

The UK National Cyber Security Centre set out 14 cloud security principles to help businesses of different sizes configure cloud services securely according to their needs. Vendor lock-in can be a common issue for businesses. Cloud vendors will give you all the tools to make your life easier, but getting out of their systems if you decide to stop working with them will be really difficult if you rely too much on their infrastructure. Beyond the technological risks, another deciding factor is the general trust toward cloud providers and the “hyperscalers” such as AWS or Google, which can provide public and hybrid cloud services to large enterprise networks. Flexibility and the ability to configure your set-up to your specific needs could be lost on the cloud. If you are running on-premises, you have more flexibility to reconfigure things. Your commercial relationship with your cloud provider will dictate how flexible you can be with your cloud infrastructure, which can prevent you from fixing unsafe issues as they come. How much control and flexibility you want over your data storage should inform your cloud set-up.


The time to implement an internal AI usage policy is now

Due to the rapid adoption of AI, there is still a lack of national and global legislation, governance, and guidance on using it, particularly in a professional environment. And while common sense and a general understanding of IT security are good places to start, this is not sufficient to rely on. It is therefore increasingly important that organisations devote time and resources to developing internal use and misuse policies for employees using AI in the workplace so that information and integrity are protected. These policies are only possible with a commitment to staying informed through ongoing research and continuous knowledge sharing; collaboration between AI experts and cybersecurity professionals at other organisations is vital for a comprehensive and proactive approach to identifying and mitigating AI-related risks. Policies need to be reinforced with good training. This starts with regular sessions and skill development for cyber professionals on the current risks of AI and those that may emerge in the future. It is backed up by role-appropriate education for employees throughout the organisation.


5 ways to step outside your comfort zone at work, according to business leaders

Simon Langthorne, head of customer relationship management at Virgin Atlantic, says the biggest transformational steps he's taken during his career are when he's stepped outside his comfort zone to embrace new opportunities. "Every time there's been something new, it's been about, 'How do you go out and find exciting things?'" he says. "It's about looking for opportunities that take you outside the space of your expertise and take you into a place where actually you can lead and grow." Langthorne says one of the big challenges that comes from moving outside your comfort zone is thinking about how you'll deal with senior manager-level conversations. "They're not your peers; they're the guys that you really look up to. But you're also trying to guide and steer them. They're excited by things like artificial intelligence, but they'll look to you as the subject matter expert," he says. Embracing new challenges -- such as taking the lead on emerging technology -- will mean you suddenly have new gravitas in an organization, even if it feels uneasy at first.


How new AI tools like ChatGPT can transform human productivity in the enterprise

Data serves as the lifeblood of modern enterprises, yet extracting meaningful insights from vast amounts of data can be a daunting task. Here, AI technologies such as machine learning and natural language processing come into play, enabling the analysis of data at scale, uncovering valuable patterns and providing actionable insights. For example, AI-powered analytics platforms can process customer data to identify trends, preferences and purchasing patterns, allowing businesses to deliver personalized experiences. ... AI has the potential to augment human decision-making by offering real-time, data-driven recommendations. Business leaders can use AI-powered predictive analytics models to forecast market trends, optimize inventory management and enhance supply chain efficiency. ... AI technologies play a vital role in facilitating seamless collaboration and knowledge sharing among employees, transcending geographical boundaries. For instance, AI-powered virtual assistants can schedule meetings, transcribe conversations and facilitate information retrieval, thereby enhancing teamwork and productivity.


8 problematic IT team members — and how to deal with them

When advising companies on digital transformations, Maura Charles, a product and leadership coach, often encounters managers who are unwilling to address toxic employees — and without intervention, the problem grows. “I see bad behavior swept under the rug regularly,” Charles says. “In technology, this often happens because companies hire solely on the basis of technical skills and experience and don’t consider the importance of communication, emotional intelligence, and growth mindset. These so-called soft skills are what make technology initiatives and products successful, though.” Her advice? Seek some outside perspective from another capable manager or other trusted source. “Seeing the impact of all of these types of behaviors on team productivity and morale, I often find that team or leadership retrospectives can help shine a light on challenges,” she says. “When leaders ignore these issues, they tend to fester, and the teams and outcomes suffer. By tackling the issues, you may avoid losing talented employees by showing that the people and their work environment matter.”


Will Your Business Soar With an Industry Cloud?

Campbell advises enterprises to begin their cloud journey by carefully studying their business strategy. “Understand where the most critical differentiation is necessary,” he suggests. “Industry clouds offer an opportunity to build differentiation around the most important capabilities.” An industry cloud can also provide a way to achieve faster time to value in less differentiating capability areas while reducing the amount of focus they require. “It’s important to remember that the evaluation is iterative as the strategy and the market evolves, so continuous monitoring is important,” Campbell warns. Singh recommends looking for an industry cloud that’s tailored to a specific field from a provider with deep area expertise. “Find a provider that not only offers a cutting-edge functional footprint in your industry, but also supports components and best practices from other industries to help create true differentiation,” he suggests. Besides enterprise-wide support, an industry cloud can also be used to address a specific need. 


Why companies should consider having a chief trust officer

Should an internal review detail the absence of trust in engagements (internal or external), then perhaps boldness is needed to effect change from within. CISOs and CSOs need to ask the questions posited by Stewart and Harkins, both of whom were at the vanguard of the chief trust officer evolution. Hitch up those pants and either work to expand your own role as necessary or push for the creation of the role for someone else. Regardless of the path taken internally, the chief trust officer's role must be sufficiently broad in scope to encompass the necessary visibility across the entire entity's landscape. In a recent MIT Technology Review piece, Elena Kvochko, chief trust officer at SAP, highlighted that "trust is a clear competitive differentiator -- having a recognized awareness that this is an important function and this is an important direction for the company -- it was critical for our success." In closing, and as noted by Stewart, the role must include demonstrable responsibilities, accountabilities and, above all, the necessary authorities. 



Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." -- Benjamin Disraeli

Daily Tech Digest - July 02, 2023

OpenAI, others pushing false narratives about LLMs, says Databricks CTO

“There are definitely the larger providers, like OpenAI, Google, and so on; they have this narrative – and they’re talking in a lot of places about how – first of all, this stuff is super dangerous, not in the sense of a disruptive technology, but even in the sense of ‘it might be evil and whatever’,” Zaharia told ITPro during an interview at Databricks AI and Data Summit 2023. “It’s very sci-fi.” “OpenAI – that’s exactly the narrative they’re pushing – but others as well. “Anytime someone talks about AI alignment or whatever, it’s often from this angle: Watch out, it might be evil. They’re also saying how it’s a huge amount of work to train [models]: It’s super expensive – don’t even try it. “I’m not sure either of those things are true.” Zaharia cited MosaicML – the startup Databricks recently acquired for $1.3 billion – as having trained a large language model (LLM) with 30 billion parameters that’s competitive with GPT-3, and “probably cost like ten to 20 times less” to train.


Ransomware: recovering from the inevitable

There’s no doubt that businesses’ cybersecurity teams are under an immense amount of pressure in the battle against ransomware, but they can only go so far alone. There must be an awareness that it simply can’t be stopped at the source, and that defending against ransomware takes a combination of people, processes and technology. The digital world can appear complex – especially in the case of large enterprise structures – so it can be helpful to stress that the digital world and the real world are not that different. Digital protections such as patching systems, multi-factor authentication, data protection and the risk of insider threats all have real-world counterparts: open windows that need to be locked at night, double locking your front door, locking away vital items in a safe, and opportunistic break-ins through unlocked windows or doors. However, whilst using a combination of people, processes and technology to minimise attacks is key, some will inevitably slip through the cracks, which is where recovery comes into play.


AI Foundation launches AI.XYZ to give people their own AI assistants

The platform enables users to design their own AI assistants that can safely support them in both personal and professional settings. Each AI is unique to its creator and can assist with tasks such as note-taking, email writing, brainstorming, and offering personalized advice and perspectives. Unlike generic AI assistants from companies like Amazon, Google, and Apple, or general-purpose tools like ChatGPT, each AI assistant designed on AI.XYZ belongs exclusively to its creator, knows the person’s values and goals, and provides more personalized help. The company sees a significant opportunity for workplaces and enterprises to provide each of their employees with their own AIs. ... AI.XYZ is available in public beta and can be accessed on the web with an invitation code. Creators can interact with their AIs through text, voice, and video. A free subscription to AI.XYZ allows users to get started creating their own AI, while a premium subscription for $20 per month allows additional capabilities and customization options. The AI Foundation has collaborated with top research institutions like the Technical University of Munich to create “sustainable AI” for everyone.


TDD and the Impact on Security

Outside-In Test-Driven Development (TDD) is an approach to software development that emphasizes starting the development process by creating high-level acceptance tests or end-to-end tests that describe the desired behaviour of the system from the point of view of its users or external interfaces. It is also commonly referred to as behaviour-driven development (BDD). With Outside-In TDD, the development process begins with writing a failing acceptance test that describes the desired behaviour of the system. This test is usually written from the perspective of a user or a high-level component interacting with the system. The test is expected to fail initially, as the system does not yet have the required functionality. Once the first acceptance test has been written, the next step is to write a failing unit test for the smallest possible unit of code that will help the acceptance test pass. This unit test defines the desired behaviour of a specific module or component within the system. The unit test fails because the corresponding code has not yet been implemented.
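The outside-in sequence can be sketched with Python's standard `unittest` module. The greeting feature here is a made-up example: the acceptance test is written first from the user's perspective, then a unit test for the smallest component, and only then the production code that makes both pass:

```python
import unittest

# --- Production code (written last, only after the tests below failed) ---

def format_greeting(name: str) -> str:
    """Smallest unit: format a greeting for one user."""
    return f"Hello, {name}!"

def greet_user(raw_name: str) -> str:
    """High-level entry point exercised by the acceptance test."""
    return format_greeting(raw_name.strip().title())

# --- Tests, written first, in outside-in order ---

class GreetingAcceptanceTest(unittest.TestCase):
    """Outermost test: desired behaviour from the user's point of view."""
    def test_user_receives_personalised_greeting(self):
        self.assertEqual(greet_user("  ada lovelace "), "Hello, Ada Lovelace!")

class FormatGreetingUnitTest(unittest.TestCase):
    """Inner test: the smallest unit needed to make the acceptance test pass."""
    def test_wraps_name_in_greeting(self):
        self.assertEqual(format_greeting("Ada"), "Hello, Ada!")

# Run with: python -m unittest <this_file>
```

Both tests start out red; implementing `format_greeting` turns the unit test green, and wiring it into `greet_user` turns the acceptance test green, completing one outside-in cycle.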


Wasm: 5 things developers should be tracking

One of Wasm’s biggest draws is its cross-platform portability. Wasm is a neutral binary format that can be shoved in a container and run anywhere. This is key in our increasingly polyglot hardware and software world. Developers hate compiling to multiple different formats because every additional architecture (x86, Arm, Z, Power, etc.) adds to your test matrix, and exploding test matrices is a very expensive problem. QE is the bottleneck for many development teams. With Wasm, you have the potential to write applications, compile them once, test them once, and deploy them on any number of hardware and software platforms that span the hybrid cloud, from the edge to your data center to public clouds. A developer on a Mac could compile a program into a Wasm binary, test it locally, and then confidently push it out to all of the different machines that it’s going to be deployed on. All of these machines will already have a Wasm runtime installed on them, one that is battle tested for that particular platform, thereby making the Wasm binaries extremely portable, much like Java.


Getting Started with Data Literacy: Two Tips for Success

How should an enterprise get started? Langer says he “came to the inescapable conclusion that data literacy must start with leaders. Data literacy isn't just for the rank-and-file.” As a litmus test when he starts talking to organizations, he asks about their leaders' commitment to data literacy. “I ask them, ‘Is your organization willing to send your leaders to training -- managers, executives, the C-suite, all of them?’ If not, which is often the case, that probably tells you everything that you need to know, because data literacy is very much a cultural transformation. If your leaders aren't all in, then there's almost no point in getting started, to be frank. If employees see their managers not exhibiting a data literacy mindset and data literacy behaviors, they will revert to business as usual.” Langer admits to receiving pushback; executives wonder if data literacy is needed because newer technologies such as no-code/low-code or generative AI already make it easier to gain insights.


How Data Observability Helps Shift Left Your Data Reliability

When you consider data observability, the term “shift left” refers to a proactive strategy that involves incorporating observability practices at the early stages of the data lifecycle. This concept draws inspiration from software development methodologies and emphasizes the importance of addressing potential issues and ensuring high quality right from the start. When applied to data observability, shifting left entails integrating observability practices and tools into the data pipeline and infrastructure right from the outset. This approach avoids treating observability as an afterthought or implementing it only in later stages. The primary goal is to identify and resolve data quality, integrity, and performance issues as early as possible, thereby minimizing the likelihood of problems propagating downstream. ... Taking a proactive approach to address data incidents early on enables organizations to mitigate the potential impact and cost associated with data issues. 
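A minimal sketch of this shift-left idea is a validation gate at the very start of a pipeline, so bad records are quarantined before they propagate downstream. The schema and rules below (an "orders" feed with hypothetical required fields) are illustrative assumptions, not a specific product's API:

```python
from datetime import datetime

# Hypothetical expectations for an "orders" feed, enforced at ingestion time
# rather than discovered after the data has propagated downstream.
REQUIRED_FIELDS = {"order_id", "amount", "created_at"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations for one incoming record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and (
        not isinstance(record["amount"], (int, float)) or record["amount"] < 0
    ):
        problems.append("amount must be a non-negative number")
    if "created_at" in record:
        try:
            datetime.fromisoformat(record["created_at"])
        except (TypeError, ValueError):
            problems.append("created_at is not an ISO-8601 timestamp")
    return problems

def ingest(records):
    """Partition records into accepted and quarantined before pipeline entry."""
    accepted, quarantined = [], []
    for record in records:
        issues = validate_record(record)
        if issues:
            quarantined.append((record, issues))
        else:
            accepted.append(record)
    return accepted, quarantined
```

Dedicated observability tools add much more (freshness monitoring, lineage, anomaly detection), but the placement is the point: the check runs at the pipeline's entrance, not as an afterthought.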


Architecting Real-Time Analytics for Speed and Scale

Apache Druid has emerged as the preferred database for real-time analytics applications due to its high performance and ability to handle streaming data. With its support for true stream ingestion and efficient processing of large data volumes in sub-second timeframes, even under heavy loads, Apache Druid excels in delivering fast insights on fresh data. Its seamless integration with Apache Kafka and Amazon Kinesis further solidifies its position as the go-to choice for real-time analytics. When choosing an analytics database for streaming data, considerations such as scale, latency, and data quality are crucial. The ability to handle the full scale of event streaming, ingest and correlate multiple Kafka topics or Kinesis shards, support event-based ingestion, and ensure data integrity during disruptions are key requirements. Apache Druid not only meets these criteria but goes above and beyond to deliver on these expectations and provide additional capabilities.
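The Kafka integration mentioned above is configured by submitting a supervisor spec to Druid. The dict below sketches its approximate shape; the datasource, topic, and broker names are hypothetical placeholders, and the exact schema should be checked against the Apache Druid documentation:

```python
# Approximate shape of a Druid Kafka ingestion (supervisor) spec, expressed
# as a Python dict. Datasource, topic, and broker names are placeholders;
# consult the Apache Druid docs for the authoritative schema.
kafka_supervisor_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "clickstream",  # hypothetical datasource name
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_id", "page"]},
            "granularitySpec": {"segmentGranularity": "hour"},
        },
        "ioConfig": {
            "topic": "clickstream-events",  # hypothetical Kafka topic
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
            "useEarliestOffset": True,
        },
        "tuningConfig": {"type": "kafka"},
    },
}
```

Once submitted, the supervisor manages Kafka consumers for the topic and ingests events continuously, which is what makes the sub-second "insights on fresh data" described above possible.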


Why business leaders must tackle ethical considerations as AI becomes ubiquitous

When it comes to ethical AI, there is a true balancing act. The industry as a whole has differing views on what is deemed ethical, making it unclear who should make the executive decision on whose ethics are the right ethics. However, perhaps the question to ask is whether companies are being transparent about how they are building these systems. This is the main issue we are facing today. Ultimately, although supporting regulation and legislation may seem like a good solution, even the best efforts can be thwarted in the face of fast-paced technological advancements. The future is uncertain, and it is very possible that in the next few years, a loophole or an ethical quagmire may surface that we could not foresee. This is why transparency and competition are the ultimate solutions to ethical AI today. Currently, companies compete to provide a comprehensive and seamless user experience. For example, people may choose Instagram over Facebook, Google over Bing, or Slack over Microsoft Teams based on the quality of experience. 


ChatGPT, compliance, and the impending wave of AI-fuelled content

Despite its convincing rhetoric, ChatGPT is, at times, deeply flawed. Quite simply, its statements can’t always be trusted. This is a reasonably devastating indictment for a tool which invites such vehement scrutiny, and has been acknowledged by OpenAI, who admit that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” ChatGPT has a vast wealth of knowledge because it was trained on all manner of web content, from books and academic articles to blog posts and Wikipedia entries. Alas, the internet is not a domain renowned for its factual integrity. Furthermore, ChatGPT doesn’t actually connect to the internet to track down the information it needs to respond. Instead, it simply repeats patterns it has seen in its training data. In other words, ChatGPT arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true, and give different (incorrect) answers to the same questions.



Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - July 01, 2023

CERT-In cyber security norms bar use of Anydesk, Teamviewer by govt dept

Cyber security watchdog CERT-In has barred the use of remote desktop software like Anydesk and Teamviewer in government departments under new security guidelines released on Friday. The guidelines prescribe that government departments use virtual private networks (VPN) for accessing network resources from remote locations and enable multi-factor authentication (MFA) for VPN accounts. "Ensure to block access to any remote desktop applications, such as Anydesk, Teamviewer, Ammyy admin etc," the Guidelines on Information Security Practices for Government Entities said. CERT-In (Indian Computer Emergency Response Team) said the purpose of these guidelines is to establish a prioritised baseline for cyber security measures and controls within government organisations and their associated organisations. Minister of State for Electronics and IT Rajeev Chandrasekhar said in an official statement that the government has taken several initiatives to ensure an open, safe, trusted and accountable digital space.


Navigating Product Owner Accountability in Scrum: Debunking Myths and Overcoming Anti-Patterns

In a misguided attempt to ‘help’ Product Owners with their important responsibilities, some organizations establish two Product Owners for a single Product. However, while this may at first seem helpful, it actually causes a lot of problems for both Product Owners involved. When multiple Product Owners exist, conflicting ideas and visions may arise, diluting the product's direction and impeding progress. ... Instead, the Product Owner can delegate tasks such as creating Product Backlog items, maintaining the roadmap, or gathering metrics to Developers on the Scrum team. However, it is important to note that while the Product Owner may delegate as needed, the Product Owner ultimately remains accountable for items in the Product Backlog as well as the product forecast or roadmap, thus ensuring that there is a single, unifying vision and goal for the product and that the Product Backlog is in alignment with that vision. If the Product Owner is delegating the creation of Product Backlog items to Developers, what does that mean? 


Cisco firewall upgrade boosts visibility into encrypted traffic

“What our competitors are saying is ‘just decrypt everything.’ But we know in the real world, customers refrain from doing that due to data privacy concerns and to meet legal/compliance requirements. Furthermore, decrypting and re-encrypting data requires technical prowess not everyone has, increases the attack surface, and also causes severe performance challenges,” Miles said. EVE works by extracting two primary types of data features from the initial packet of a network connection, according to a blog written by Blake Anderson, a software engineer in Cisco’s advanced security research group. First, information about the client is represented by the Network Protocol Fingerprint (NPF), which extracts sequences of bytes from the initial packet and is indicative of the process, library, and/or operating system that initiated the connection. Second, it extracts information about the server such as its IP address, port, and domain name (for example a TLS server_name or HTTP Host). 
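Cisco's actual NPF extraction is proprietary to its implementation, but the general idea described in the blog post — deriving a stable fingerprint from selected byte sequences of a connection's initial packet combined with server metadata, without decrypting anything — can be illustrated with a toy sketch. Everything here (the byte ranges, the hash, the inputs) is an illustrative assumption, not Cisco's algorithm:

```python
import hashlib

def toy_fingerprint(initial_packet: bytes, server_ip: str,
                    port: int, server_name: str) -> str:
    """Illustrative only: hash selected initial-packet byte ranges together
    with server metadata (IP, port, server name) into a stable fingerprint.
    Real systems like EVE map such fingerprints to known client processes."""
    # Pick a few byte ranges from the initial packet; in real TLS traffic,
    # analogous ranges would carry version and extension data.
    selected = initial_packet[:8] + initial_packet[40:56]
    material = selected + f"{server_ip}:{port}:{server_name}".encode()
    return hashlib.sha256(material).hexdigest()[:16]
```

The useful property the sketch demonstrates is stability: identical client behaviour yields the same fingerprint, while any change to the observed bytes or server metadata yields a different one — all without inspecting the encrypted payload.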


Scrum vs. Kanban vs. Lean: Choosing Your Path in Agile Development

While Scrum is commonly associated with software development teams, its principles and lessons have broad applicability across various domains of teamwork. This versatility is one of the key factors contributing to the widespread popularity of Scrum. Scrum is founded upon the concept of time-boxed iterations called sprints, which are designed to enhance team efficiency within cyclical development cycles. ... Kanban is well-suited for organizations seeking to embrace the benefits of agility while minimizing drastic workflow changes. It is particularly suitable for projects where priorities frequently shift, and ad hoc tasks can arise anytime. Kanban is a flexible methodology that can be applied to various domains and teams beyond software development. ... Lean methodology strongly emphasizes market validation and creating successful products that provide value to users. It is particularly well-suited for new product development teams or startups operating in emerging niches where a finished product may not yet exist, and resources are limited.


3 Ways to Build a More Skilled Cybersecurity Workforce

In addition to insights around highly sought-after skill sets and job titles, OECD's report also reveals that demand for cybersecurity professionals has spread beyond the confines of major urban centers. It calls for a more decentralized workforce to meet demand in underserved areas. ... If companies are to close the skills gap and meet the current demand for cybersecurity workers, they will need to broaden their horizons to account for more nontraditional cybersecurity career paths. In doing so, they will enhance the industry with a broader range of unique experiences and life skills. Recruiting more diverse candidates also allows companies to approach security challenges from different angles and identify solutions that may not have been considered otherwise. When a workforce is as diverse as the cybersecurity threats an organization faces, it can pull from a broader range of professional and personal experiences to more effectively and inclusively protect the organization and its end users.


AI's Teachable Moment: How ChatGPT Is Transforming the Classroom

"Teachers could say, 'Hey my students are really interested in TikTok,' then feed that to the AI," says Liu. "An AI could come up with three analogies related to TikTok that connect students to their needs and interests." Liu believes we absolutely need to acknowledge the immediate threats surrounding AI and its initial impact on teachers, particularly around skills assessments and cheating. One approach he takes is to speak openly with students and acknowledge that AI is the new thing and that we're all learning about it – what it can do, where it might lead. The more open conversations educators have with students, he says, the better. In the near term, students are going to cheat. That's impossible to avoid. YouTube and TikTok are bulging at the seams with tricks to help students avoid plagiarism trackers. In the medium term, Liu believes, we need to reevaluate what it means to grade students. Does that mean allowing students to use AI in assessments? Or changing how to teach topics? Liu isn't 100% sure.


Top 5 Benefits Of Blockchain Technology

Transparency within the Blockchain ecosystem refers to the open visibility of transactions, enabling all participants to validate and verify the recorded data. Unlike traditional systems that rely on centralized authorities, Blockchain operates on a decentralized network, where each transaction is recorded on a public ledger known as the Blockchain. ... Immutability is a cornerstone of Blockchain technology. It guarantees that once a transaction is recorded on the Blockchain, it becomes virtually impossible to alter or tamper with the data. This is achieved through a combination of cryptographic techniques and consensus mechanisms. Blockchain achieves data immutability by using cryptographic hashing. Each transaction is assigned a unique cryptographic hash, which is essentially a digital fingerprint. This hash is created by applying complex mathematical algorithms to the transaction data, resulting in a fixed-length string of characters. Furthermore, Blockchain relies on consensus mechanisms such as Proof of Work (PoW) or Proof of Stake (PoS) to validate and verify transactions.
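The hashing and immutability mechanics described above (though not the PoW/PoS consensus layer) can be sketched in a few lines of standard-library Python. The transactions are made up; the point is that each block records its predecessor's hash, so tampering anywhere breaks every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Digital fingerprint of a block: SHA-256 over its canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()  # fixed-length hex string

def make_block(data: str, prev_hash: str) -> dict:
    """Each block embeds its predecessor's hash, chaining the ledger."""
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute each link: altering any block changes its hash and
    invalidates every block that points to it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

# Build a three-block chain with hypothetical transactions.
genesis = make_block("genesis", "0" * 64)
block1 = make_block("Alice pays Bob 5", block_hash(genesis))
block2 = make_block("Bob pays Carol 2", block_hash(block1))
chain = [genesis, block1, block2]
```

Editing `genesis["data"]` after the chain is built changes its hash, so `chain_is_valid(chain)` flips to `False` — the "virtually impossible to tamper with" property, in miniature. A real blockchain adds consensus (PoW/PoS) so no single party can simply recompute the chain.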


Technical Debt tracking supports projects to “do it right”

For decades, there have been logs of outstanding bugs found in testing but not corrected before the project is implemented. The term technical debt adds the concept that there are consequences to those decisions, and that there are strong reasons to prioritize the follow-up to fix things and clear that list. Most of us are aware of workarounds that were left in place permanently and eventually cost too much. We may have seen a system with poor performance that slowed the work of key workers and/or was missing functionality that impacted the customer experience. All of these are important reasons that technical debt should be cleared up. There are other reasons too. Generally, people do not purposefully create poor designs or code with bugs. ... One of the interesting concepts that has been offered by Martin Fowler is the Technical Debt Quadrant that talks about the prudent but inadvertent technical debt that is created as we learn during a project and realize how the project should have been done.


Successful digital transformation requires simplistic thinking

While organizations are aggressively pursuing transformation goals, Chaudhry warned that antiquated mindsets and a range of internal factors can seriously inhibit innovation and prevent businesses from achieving their goals. Most notable among these is a complacent culture among some IT leaders who are stuck in a loop of traditional, outdated practices. “IT plays the most important role in driving transformation. You play the most important role, but you also need to act fast to drive change,” he said. “You can’t sit back and say ‘this is how things have been done for the last 30 years, so let’s keep doing so’.” ... Inertia, as he puts it, is a powerful inhibitor that locks IT leaders and organizations into an outdated mindset which prevents them from embracing change. “Inertia is powerful, and it holds you back because we are comfortable with what we’ve been doing for the last 10, 20 years or so,” he said. Research has often identified inertia as a common inhibitor in digital transformation, whereby teams are reluctant - or unwilling - to accept change.


Strategies to drive the Data Mesh cultural transformation

It’s important to have consistent and clear communication to ensure that everyone understands the reasons and the effects of change. Leaders must communicate the vision and benefits of Data Mesh. They also need to guide how the new ways of working are going to be adopted through well-defined structures, roles and responsibilities for the new data product teams. To ensure data product ownership and accountability, defining clear KPIs and metrics for each data product team to measure success and track progress is critical. ... Rather than trying to adopt Data Mesh all at once, organizations can start with small pilot projects and gradually expand. This approach can help understand how processes defined in vitro work in real life. It also comes with lessons learned which help followers avoid the initial mistakes. ... This ensures that everyone in the organization understands the new concepts and ways of working. It could include training sessions and coaching on Data Mesh, product thinking, design, user research, agile methodologies, cross-functional team collaboration, and data product ownership.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith