Daily Tech Digest - February 28, 2023

Does the Future of Work include Network as a Service (NaaS)?

NaaS enables companies to implement a network infrastructure that evolves over time, providing the flexibility to adapt as business needs change. With NaaS, companies can focus on business outcomes and service-level objectives for their network, and on the accessibility required for their community of workers, partners, and customers. NaaS relieves organizations of having to keep up with the pace of technology change by relying on the strength and expertise of their implementation partner. It replaces the large upfront capital expenditure that often goes into new network infrastructure design, planning, and implementation with a monthly subscription or flexible consumption model, alleviating the financial impact of rebuilding a workplace environment. NaaS also offers more flexibility by not tying the organization to specific hardware or capital investments that may eventually become obsolete.


How Technical Debt Hampers Modernization Efforts for Organizations

“When you develop an application, you take certain shortcuts for which you're going to have to pay the price back later on,” explains Olivier Gaudin, cofounder and CEO of SonarSource, which develops open-source software for continuous code quality and security. “You accept that your code is not perfect. You know that it will have a certain cost when you come back to it later. It will be a bit more difficult to read, to maintain or to change.” ... Experts note the patience and long-term strategy required to overcome technical debt. “It’s a matter of focusing on longer-term strategy over short-term financial goals,” Orlandini says. “Unfortunately for Southwest, the issues were well-known. However, the business as a whole did not have the will or motivation to invest in fixing it until it was too late. They are an extreme example but serve as a very valid case in point of what can happen if you do not understand the issues and the ultimate repercussions of not investing to avoid a meltdown, in whatever form that would take for each organization.”


Just how big is this new generative AI? Think internet-level disruption

While resources and information availability increased by an unprecedented degree, so too did misinformation, scams, and criminal activity. One of the biggest problems with ChatGPT is that it presents completely wrong information as eloquently and confidently as it presents accurate information. Unless requested, it doesn't provide sources or cite where that information came from. Because it aggregates a tremendous amount of free-form information, it's often impossible to trace how it comes by its knowledge and assertions. This makes it ripe for corruption and gaming. At some point, AI designers will need to open their systems to the broader internet. When they do, oh boy, it's going to be rough. Today, there are entire industries dedicated to manipulating Google search results. I'm often required by clients to put my articles through software applications that weigh each word and phrase against how much Google oomph it produces, and then I'm asked to change what I write to appeal more to the Google algorithms.


Is blockchain really secure? Here are four pressing cyber threats you must consider

Blockchains use consensus protocols to reach agreement among participants when adding a new block. Since there is no central authority, vulnerabilities in consensus protocols can let attackers control a blockchain network and dictate its consensus decisions, through attack vectors such as majority (51%) and selfish mining attacks. ... The second threat is the exposure of sensitive and private data. Blockchains are transparent by design, and participants may share data that attackers can use to infer confidential or sensitive information. As a result, organizations must carefully evaluate their blockchain usage to ensure that only permitted data is shared and no private or sensitive information is exposed. ... Attackers may compromise private keys to take control of participants’ accounts and associated assets, either by using classical information technology methods, such as phishing and dictionary attacks, or by exploiting vulnerabilities in blockchain client software.
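The majority-attack risk the excerpt mentions can be made concrete with the gambler's-ruin result from the Bitcoin whitepaper: the probability that an attacker holding a fraction q of total hash power ever overtakes an honest chain that is z blocks ahead. A minimal sketch (the function name and the example figures are illustrative):

```python
# Probability that an attacker controlling fraction q of total hash power
# eventually overtakes the honest chain from z blocks behind
# (gambler's-ruin result from the Bitcoin whitepaper).
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q  # honest hash-power share
    if q >= p:
        return 1.0  # a majority (51%) attacker always wins eventually
    return (q / p) ** z

# A minority attacker's odds fall off quickly with confirmation depth...
print(catch_up_probability(0.30, 6))  # well under 1%
# ...but at a 51% share the attack succeeds with certainty.
print(catch_up_probability(0.51, 6))  # 1.0
```

Below a 50% share the attacker's chances decay exponentially with confirmation depth, which is why services wait for several confirmations; at or above 50%, no number of confirmations helps.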


Behaviors To Avoid When Practicing Pair Programming

Despite its popularity, pair programming seems to be a methodology that is not widely adopted by the industry. When it is, the meaning of "pair" and "programming" may vary with the context. Sometimes pair programming is used at specific moments throughout a practitioner's day to fulfil specific tasks, as Lauren Peate reported on the podcast Software Engineering Unlocked, hosted by Michaela Greiler. But in XP, pair programming is the default approach to developing all aspects of the software. Because of this variation in what pair programming means, companies that adopt it may face misconceptions about how to practice it. Often, this is the root cause of a poor experience while pairing. ... Lack of soft (social) skills ... The driver-and-navigator style requires the pair to focus on a single problem at once. The navigator should therefore support and question the driver's decisions to keep both in sync. When that does not happen, the collaboration session may suffer from a lack of interaction between the pair.


When it comes to network innovation, we must protect the data ‘pipes’

We must conclude that any encrypted information collected by foreign intelligence services will eventually be cracked, given sufficient compute power and time. This is one reason supercomputers are part of the race for information dominance. At that level, the amount of compute is truly calculated in cost to build and cost to operate. If you do not have access to cutting-edge chips, you can simply increase the number of compute chips, whether central processing units, graphics processing units, or other compute units such as AI accelerators. It will cost more to build and consume more electricity to operate, but the compute will be available to the government or corporation that invested in the system. Without a true “zero trust” scheme, any compromise of any node on any network becomes a pivot point for further attacks. The problem with “zero trust” is that, to be effective, it needs a mature network model that can be secured, not a “growing, organic network” that is adapting rapidly to meet the needs of the user.
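The claim that enough compute eventually cracks encryption deserves a number. A rough sketch (the guesses-per-second figure is invented for illustration) shows why raw scaling alone does not break modern symmetric keys, and why the realistic concern is harvesting ciphertext now in the hope of future algorithmic or quantum breakthroughs:

```python
# Rough time to exhaust a symmetric key space by brute force, given an
# aggregate guessing rate (illustrative figures, not a forecast).
def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    seconds = 2 ** key_bits / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

# Even at a hypothetical 10^18 guesses/s, AES-128 stays far out of reach...
print(f"{years_to_exhaust(128, 1e18):.3e} years")  # on the order of 10^13 years
# ...while DES-era 56-bit keys fall in a fraction of a second.
print(f"{years_to_exhaust(56, 1e18):.3e} years")
```

The exponential gap is the point: each added key bit doubles the work, so "just add more chips" cannot close it for well-chosen modern keys.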


Unstructured data and the storage it needs

As we’ve seen, unstructured data is more or less defined by the fact that it is not created by use of a database. More structure may be applied to unstructured data later in its life, but then it becomes something else. ... It’s quite possible to build adequately performing file and object storage on-site using spinning disk; at the capacities needed, HDD is often the most economic option. But advances in flash manufacturing have led to high-capacity solid-state storage becoming available, and storage array makers have started to use it in hardware capable of file and object storage. This is QLC – quad-level cell – flash. QLC stores four bits per cell, providing higher storage density and so a lower cost per GB than any other commercially usable flash today. The trade-off with QLC, however, is that flash lifetime can be compromised, so it is better suited to large-capacity, less frequently accessed data. Still, the speed of flash is particularly well suited to unstructured use cases, such as analytics, where rapid processing and therefore fast I/O is needed.


The Cybersecurity Hype Cycle of ChatGPT and Synthetic Media

Historically, spearphishing messages have been partially or entirely crafted by people. However, synthetic chat makes it possible to automate this process – and highly advanced synthetic chat, like ChatGPT, can make these messages just as convincing as a human-written message, or more so. It also opens the door to automated, interactive malicious communications. With this in mind, threat actors can quickly and cheaply scale up high-cost, highly effective approaches like spearphishing. These capabilities could be used to support cybercrime, nation-state operations and more. Advances like ChatGPT may also have a meaningful impact on information operations, which have come to the forefront due to foreign influence in recent US presidential elections. Technologies such as ChatGPT can generate lengthy, realistic content supporting divisive narratives, which could help scale up information operations.


How to de-risk your digital ecosystem

In short, in any de-risking framework, one must assume that the largest source of cyberthreats comes not from someone breaking in, but from a door left open for an uninvited guest. Organizations must adapt their mindset, their processes, and their resources accordingly. ... In many organizations, the responsibility for closing risk gaps falls to several leaders, but not to a single point of authority. The failure is understandable, as digital ecosystems touch multiple dimensions of an enterprise. But then responsibility for the total risk environment and for de-risking is shared — though not necessarily met. A lack of accountability results in a lack of power to act and to set de-risking as a priority within the organization. ... Without an understanding of the business context, remediating risk is difficult to do effectively. For example, an outside vendor can be a potential source of risk yet also play a critical and central role in the business. Resolving and mitigating the issue may require special handling and attention.


Closing the Cybersecurity Talent Gap

Cybersecurity is often viewed as just another technical talent field, yet candidates are expected to possess a wide range of rapidly evolving knowledge and skills. When filling staffing gaps, leaders should examine the skill sets that are missing from their current team, such as creative problem solving, stakeholder communications, buy-in development, and change enablement. “Look for candidates who will help balance out existing team skills as opposed to individuals who match a specific technical qualification,” Glair says. Before hiring can begin, it's necessary to attract suitable candidates. Initial search steps should include website updates and social media posts, Glair says. He also suggests creating an internal “cybersecurity academy” that will build talent from within the organization. “This should include the technical, process, communications, and leadership skills needed to address today’s cybersecurity challenges,” Glair notes. Burnet recommends sponsoring a “sourcing jam.” “That means getting recruiters and/or hiring managers in a room together ... to trawl through their networks and get them to personally reach out.”



Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour

Daily Tech Digest - February 27, 2023

Embrace and extend Excel for AI data prep

Google has come out with a Chrome extension called GPT for Sheets, which allows users to manipulate data with conversational language; Microsoft says it will integrate ChatGPT into all of its products, with Bing first. Microsoft recently invested $10 billion in OpenAI, the creators of ChatGPT. But as exciting (and sometimes disappointing) as ChatGPT applications may be, there’s a much more mundane—and promising—approach to machine learning that’s already available. ... This is the technical process of converting data from one format, standard, or structure to another, without changing the content of the data sets, in order to prepare it for consumption by a machine learning model. Data prep is the equivalent of janitorial work, albeit incredibly important work. Transformation increases the efficiency of business and analytic processes, and it enables businesses to make better data-driven decisions. But it’s difficult and time-consuming unless the user is familiar with Python or the popular query language SQL.
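The "janitorial" transformation work described above can be sketched with a minimal example of normalizing heterogeneous records into one schema before a model consumes them; the field names and formats here are invented for illustration:

```python
# A tiny, hypothetical data-prep step: convert mixed-format source records
# into one clean schema without changing the content of the data.
import csv
import io

raw = io.StringIO(
    "name,signup_date,spend\n"
    "Ada,2023/01/15,$1200\n"
    "Grace,2023/02/03,$950\n"
)

def prepare(row: dict) -> dict:
    return {
        "name": row["name"].strip().lower(),
        "signup_date": row["signup_date"].replace("/", "-"),  # ISO-style dates
        "spend": float(row["spend"].lstrip("$")),             # numeric, unit-free
    }

cleaned = [prepare(r) for r in csv.DictReader(raw)]
print(cleaned[0])  # {'name': 'ada', 'signup_date': '2023-01-15', 'spend': 1200.0}
```

Tools like the spreadsheet extensions mentioned above aim to let users express exactly this kind of per-field cleanup conversationally, instead of in Python or SQL.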


Digital forensics and incident response: The most common DFIR incidents

SOCs already make use of automation as much as possible, as they need to deal with telemetry, but automation for digital forensics is different, as it mostly needs data processing by orchestrating, performing and monitoring forensic workflows. Half of DFIR professionals indicate that investments in automation would be greatly valuable for a range of DFIR functions, as workflows still rely too much upon the manual execution of many repetitive tasks. More than 20% of the survey respondents indicated automation would be mostly valuable for the remote acquisition of target endpoints, the triage of target endpoints, and processing of digital evidence, as well as documenting, summarizing and reporting on incidents. ... A field under such rapid evolution needs informed and decisive leadership to set strategies and direct resources in an efficient way. Leaders influence the way DFIR professionals can efficiently access data sources they need, which is often difficult, as more than a third of the survey respondents indicated.


DDoS Attacks Becoming More Potent, Shorter in Duration

Microsoft says TCP reflected amplification attacks are becoming more prevalent and powerful, with more diverse types of reflectors and attack vectors, typically exploiting "improper TCP stack implementation in middleboxes, such as firewalls and deep packet inspection devices." In reflection attacks, attackers spoof the IP address of the target to send a request to a reflector, such as an open server or middlebox, which then responds to the target, such as a virtual machine. The latest TCP reflected amplification attacks can reach "infinite amplification" in some cases. In April 2022, a reflected amplified SYN+ACK attack on an Azure resource in Asia reached 30 million packets per second and lasted 15 seconds. "Attack throughput was not very high; however, there were 900 reflectors involved, each with retransmissions, resulting in a high pps rate that can bring down the host and other network infrastructure," the report says.
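The report's arithmetic can be sketched with a toy model: the victim's packet rate scales with the number of reflectors and with each reflector's retransmissions, so modest attacker traffic produces a large flood. The figures below are invented for illustration, not Microsoft's:

```python
# Toy reflection-attack model: each spoofed SYN elicits a SYN+ACK plus
# several retransmissions from every reflector, all aimed at the victim.
def victim_pps(spoofed_syn_pps: int, reflectors: int, retransmits_per_syn: int) -> int:
    return spoofed_syn_pps * reflectors * (1 + retransmits_per_syn)

# 5,000 spoofed SYNs/s fanned out over 900 reflectors, 5 retransmits each:
print(victim_pps(5_000, 900, 5))  # 27,000,000 packets/s toward the victim
```

Because retransmissions continue without any further attacker input, the ratio of victim traffic to attacker traffic can keep growing, which is the sense in which such attacks approach "infinite amplification."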


How the Economic Downturn Has Affected Security Funding, M&A

"The first thing that happens when you go into a down economic cycle is: Everybody goes on defense," Ackerman says. "They rationalize the platform, make sure it's stable and right-size for the market. Once that foundation is established, then they go on offense. I think you're going to see an acceleration of M&A activity by the big guys as they get through this consolidation and rationalization process." DeWalt expects industrial control systems and OT security to get lots of attention from the investment community in 2023 given the technology's lack of penetration and volume of attacks against industrial, non-IT networks. Network and infrastructure security had the fifth-highest level of M&A and financing activity in 2022, including a $125 million Series C funding round for critical infrastructure firm Fortress. DeWalt says the Russia-Ukraine war has led to increased attention on data management as data wipers, data poisoning and the poisoning of AI algorithms become ways to foment misinformation and disinformation.


Yes, Virginia, ChatGPT Can Be Used to Write Phishing Emails

Script kiddies in particular have been asking if ChatGPT might help them build better malware for free. Results have been extremely mixed. "Right now, I think it's a novelty," says John Kindervag, creator of zero trust and senior vice president of cybersecurity strategy at ON2IT Group. But as AI gets better, he says, "probably it will allow the attackers to craft more sophisticated attacks, and it will toast everybody who is not paying attention." So far, at least, the fervor over AI chatbots being used to build a better cybercrime mousetrap is claptrap, says security researcher Marcus Hutchins, aka MalwareTech. ... Criminals needn't bother to use AI chatbots, which are trained on publicly available code. Instead, they can go to the source. "If someone with zero coding ability wants malware, there are thousands of ready-to-go examples available on Google" and GitHub, Hutchins says. Another rising concern is that criminals will use AI chatbots to craft better phishing email lures, especially outside their native language.


The Evolution of APIs: From RESTful to Event-Driven

Synchronous microservice limitations can be overcome through asynchronous interaction, event-driven architecture, and event-enabling traditional microservices, taking advantage of the constant flow of business and technical events by acting on them promptly. As awareness of the importance of events and event-driven architecture (EDA) grows, architects and developers are exploring ways to integrate events into microservices. However, successful adoption of EDA also requires a change in mindset and approach from business stakeholders, product owners, and architects. This shift involves moving from a data-centric approach to one that uses events to drive business decisions and logic. Full event-native adoption is necessary to fully leverage the benefits of events throughout the various stages of the business. Modern APIs are predominantly based on microservices, but events and EDA are becoming increasingly important. The future of APIs lies in combining the strengths of APIs and EDA to create event-driven APIs.
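The shift from request/response to event-driven interaction can be sketched with a toy in-memory event bus; the topic and payload names are invented. Instead of a consumer polling a REST endpoint, it subscribes once and reacts when the producing service publishes:

```python
# A toy in-memory event bus illustrating publish/subscribe decoupling:
# producers emit events without knowing who, if anyone, is listening.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)  # a downstream microservice
bus.publish("order.created", {"order_id": 42})   # the producing service
print(received)  # [{'order_id': 42}]
```

A production EDA would put a broker (Kafka, an MQTT broker, a cloud event grid, etc.) where this in-process dictionary sits, but the decoupling of producer from consumer is the same idea.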


Scotland launches data strategy for health and social care

Carol Sinclair, chair of the Scottish government’s data board for health and social care, said in the strategy’s foreword that the aim is to “empower citizens and staff” through ensuring data supports the delivery of health and social care services. “Public trust and the ethical use of data for public good is central to this strategy,” she said. “We are working alongside colleagues across government to ensure the principles of Open Government are followed as we define and publish key, ethically sound and publicly trusted principles to support the unlocking of the social and economic value associated with the use of public sector personal data in the service of the people of Scotland.” For health and social care staff, the strategy aims to improve discoverability, accessibility, interoperability and reusability, making it easier to access data across organisations. The Scottish government is already working on the replacement of the Community Health Index (CHI) system, a platform which has been in place since the 1970s. 


How to Have More Effective Conversations With Business Stakeholders About Software Architecture

One big barrier to effective conversations about technical decisions that need business input is that development teams and business stakeholders speak different languages. To be more precise, when techies talk about technology, business people tune out. What they need to do is frame discussions in terms of how technical choices will affect business outcomes - something business stakeholders do care about. ... The metaphor helps, to a point, but an even better approach is to describe how addressing a technical issue will enable the organization to achieve a business outcome that they could not otherwise do. Or how not addressing a technical issue will impair the business outcomes that the organization can achieve. This conversation works both ways. When there is a major shift in business priorities, such as the organization deciding to exit a specific market, or needing to respond to regulatory mandates, describing the shift in terms of business outcomes helps everyone understand the impact.


6 precautions for CIOs to consider about ChatGPT

“The success of ChatGPT in a consumer capacity is clear. And since its language model is effectively trained on all the text on the web, the output is so fluent that it can be challenging, if not impossible, to decipher whether the author is human or a bot. But in a higher-stakes enterprise context, the fluency of the response creates a false sense of trust for the user: It looks right. It sounds right. But if you run the numbers, it might be wrong. When you’re trying to make your friends laugh, accuracy doesn’t matter at all. But when trying to make a decision that could impact lives and livelihoods, that unreliability can have catastrophic consequences. In a sense, ChatGPT is similar to Google Translate. While Google Translate is great for travel and other casual use cases, an enterprise won’t trust it to faithfully translate their marketing materials for new geographies or their contracts for international agreements. The stakes are just too high to gamble on a statistical model. Successful applications will require organizations to train and fine-tune a model like ChatGPT on proprietary enterprise information to help it interpret and produce the “language” used within that organization.”


'New Class of Bugs' in Apple Devices Opens the Door to Complete Takeover

The findings are another puncture wound in the perception that Apple devices are somehow inherently more secure than PCs or Android devices. "Since the first version of iOS on the original iPhone," Emmitt explained, "Apple has enforced careful restrictions on the software that can run on their mobile devices." The devices do this with code signing. Functioning somewhat like a bouncer at a club, iPhone only allows an application to run if it has been cryptographically signed by a trusted developer. If any entity — a developer, hacker, etc. — wishes to run code on the machine, but they're not "on the list," they'll be shut out. And "as macOS has continually adopted more features of iOS," Emmitt noted, "it has also come to enforce code signing more strictly." As a result of its strict policies, Apple has earned a reputation in some corners for being particularly cyber secure. Yet that extra stringency can only extend so far. "I think that there is a misconception when it comes to Apple devices," says Mike Burch, director of application security for Security Journey. 
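The "bouncer" control flow described above can be sketched as follows. Real code signing relies on asymmetric certificate chains; the HMAC shared key here is only a stand-in to keep the example self-contained, and the developer IDs are invented:

```python
# Toy "code signing" gate: run code only if its signature verifies against
# a trusted developer's key. (Real platforms use asymmetric signatures and
# certificate chains; HMAC here just keeps the sketch self-contained.)
import hashlib
import hmac

TRUSTED_KEYS = {"dev-team-a": b"secret-a"}  # hypothetical trusted developers

def sign(code: bytes, dev_id: str) -> bytes:
    return hmac.new(TRUSTED_KEYS[dev_id], code, hashlib.sha256).digest()

def may_run(code: bytes, dev_id: str, signature: bytes) -> bool:
    key = TRUSTED_KEYS.get(dev_id)
    if key is None:
        return False  # developer not "on the list"
    expected = hmac.new(key, code, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

code = b"print('hello')"
sig = sign(code, "dev-team-a")
print(may_run(code, "dev-team-a", sig))        # True: signed and untampered
print(may_run(b"tampered", "dev-team-a", sig)) # False: code no longer matches
```

The essential property survives the simplification: any change to the code, or any signer not on the list, fails verification and is refused.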



Quote for the day:

"To command is to serve: nothing more and nothing less." -- André Malraux

Daily Tech Digest - February 26, 2023

Skills-based hiring continues to rise as degree requirements fade

Hard skills, such as cybersecurity and software development, are still in peak demand, but organizations are finding soft skills can be just as important, according to Jamie Kohn, research director in Gartner's human resources practice. Soft skills, which are often innate, include adaptability, leadership, communications, creativity, problem solving or critical thinking, good interpersonal skills and the ability to collaborate with others. “Also, people don’t learn all their [hard] skills at college,” Kohn said. “They haven’t for some time, but there’s definitely a surge in self-taught skills or taking online courses. You may have a history major who’s a great programmer. That’s not at all unusual anymore. Companies that don’t consider that are missing out by requiring specific degrees.” ... Much of the recent shift to skills-based hiring is due to the dearth of tech talent created by the Great Resignation and a growing number of digital transformation projects. While the US unemployment rate hovers around 3.5%, in technology fields it’s less than half that (1.5%).


Do digital IDs work? The nation with one card for everything

No system is foolproof: a potential security flaw uncovered in 2017 required the temporary suspension of 800,000 cards. Users occasionally receive notifications that some data may have been compromised. Nor does it eliminate fraud. According to official figures, Estonians were scammed out of €5 million (£4.4 million) last year, through a familiar mixture of fraudulent calls, phishing messages and fake websites in which they were tricked into handing over passwords. But in two decades no one has decrypted the technology itself. Indeed, the system has become so embedded in Estonian life it is hard to find anyone who knocks it. Its few critics are mostly to be found among the ranks of the populist right-wing Conservative People’s Party of Estonia (Ekre), which is expected to take third place in the election. The party has expressed scepticism about the security of online voting — and its supporters are less likely to use it than supporters of mainstream parties. But although it spent two years in a coalition government from 2019, Ekre did nothing to dismantle the system.


Microsoft trains ChatGPT to control robots

The team plans to leverage the platform's ability to develop coherent and grammatically correct responses to various prompts and questions, and to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks. "We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems." The key obstacle for an AI-based language model is solving problems while accounting for the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world. Even though ChatGPT can do a lot on its own, it still needs some help. Microsoft has released a series of design principles, including unique prompting structures, high-level APIs, and human feedback via text, which can be used to guide language models toward solving robotics tasks. The firm is also introducing PromptCraft, an open-source platform where anyone can "share examples of prompting strategies for different robotics categories."


Job Interview Advice for Junior Developers

Remember the audience you are talking to. Human resources personnel are very different to the dev guy sitting in the interview with them. You won’t be working with anyone in HR, but they are the gatekeepers. In smaller outfits, there will probably be a manager type and a dev guy. It doesn’t hurt to empathize with how their views differ. Make sure you are comfortable with all the parts of the agile development cycle, not just the bits your team happened to pick up in your last project. I have never used pull requests, but they are de rigueur in many places. Similarly code reviews. Retrospectives might seem pointless to you, but how else do you improve a process? Don’t assume that what your team called DevOps or Kanban is exactly the same as what everyone else does. One of the biggest snags in an interview — I think the most common — is when the interviewee shows little enthusiasm for an aspect of development the interviewer happens to think is important. Your negative hot take on a mainstream process, especially without experience weighing in behind you, may well be seen as a sign of rigidity.


Cyberattacks hit data centers to steal information from global companies

Resecurity identified several actors on the dark web, potentially originating from Asia, who during the course of the campaign managed to access customer records and exfiltrate them from one or multiple databases related to specific applications and systems used by several data center organizations. In at least one of the cases, initial access was likely gained via a vulnerable helpdesk or ticket management module that was integrated with other applications and systems, which allowed the threat actor to perform a lateral movement. The threat actor was able to extract a list of CCTV cameras with associated video stream identifiers used to monitor data center environments, as well as credential information related to data center IT staff and customers, Resecurity said. Once the credentials were collected, the actor performed active probing to collect information about representatives of the enterprise customers who manage operations at the data center, lists of purchased services, and deployed equipment.


Avoiding vendor lock-in and the data gravity trap

Today, enterprises are adopting hybrid cloud and multicloud strategies to avoid vendor lock-in and the data gravity trap. In fact, many enterprises are choosing vendors who provide cloud-agnostic services that are also open source to benefit from the most freedom and to avoid vendor lock-in. In addition, working with vendors with large partner ecosystems can help reduce the risks of lock-in. Open standards are the antidote to proprietary technology. Open standards allow users to move freely between vendors—to mix and match or integrate—with competing vendors to create their own solution. They allow you to compose your service or system freely and liberate you from the proprietary interfaces of a vendor. Open standards came about from our prior experiences of vendor lock-in. If we forget that history, we are condemned to repeat it. ... While the general movement is toward the concept of “composable IT,” where software-defined infrastructure and application components interoperate seamlessly to make businesses nimble, there are times when vendor lock-in makes sense.


Boards tapping CIOs to drive strategy

“CIOs are playing a leading role in orchestrating transformation and are stepping up in response to the changing industry dynamics,” said Logicalis CEO Bob Bailkoski, “yet they are faced with challenges to navigate, including a potential recession and talent shortages.” “In addition to this, they are experiencing increased pressure to deliver digital-based outcomes for their organisations, giving them more exposure to their boards and requiring a different way of operating.” In many companies, that “different way” seems to be that their traditional remit – implementing technology to support business strategy – has changed dramatically: 41 per cent reported having some level of responsibility for business strategy as well as the technology to deliver it, and 80 per cent said business strategy will become a bigger part of their role in the next two years. Innovation and digital transformation are two of four primary areas of focus for CIOs identified in the study – the others being strategy and the reimagination of service partnerships, which is expected to see three-quarters of CIOs spending more on IT outsource management this year than last.


How Companies Are Using Data to Optimize Manufacturing Operations

Real-time agility requires combining data from multiple sources to create new insights for use cases like machine learning. Manufacturers are familiar with streaming data today, but this is nowhere near the point of saturation. Everything from supply chain shortages to COVID-19 sick time and weather disruptions has made the simple task of getting shipments from point A to B complicated and uncertain. However, companies that consolidate data from asset-based sensors, predictive maintenance algorithms, and ordering and staffing systems (such as enterprise resource management, supply chain management and human resources software), can use it to respond much more quickly, and keep assets operating at maximum efficiency. Industrial asset management, especially in transportation, allows companies to showcase the value of real- or near-real-time data in a range of scenarios, from airlines to railroads. Large-scale assets deployed in the past few years generate a tremendous amount of streaming data. 


3 ways to retool your architecture to reduce costs

When you have one team developing and operating one application deployed on one server, it's not difficult to figure out the infrastructure, labor, and software costs of developing the application. However, when you have thousands of applications, it becomes a real mess to trace every penny flowing into your application's TCO. ... The reality is far more complicated. For platforms, especially hybrid cloud, some costs may go away immediately, while some will be redistributed to new application portfolios. The labor costs associated with the application will remain unless you actually eliminate that labor; otherwise, those costs will be redistributed across the other applications the team supports. In other words, labor costs will increase for those other applications. ... There are two views of measuring excess capacity. First, to decrease unused reservations by application owners, you must allocate total platform cost by total reservations. However, for platform owners to understand excess capacity compared to the total built capacity, you must allocate the total platform cost by the total built capacity.
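The two capacity views described above reduce to a one-line calculation; the cost and capacity numbers below are invented for illustration:

```python
# Two unit-cost views of the same platform spend: per reserved unit (to
# nudge application owners to trim reservations) and per built unit (to
# show platform owners the cost of idle, already-built capacity).
def unit_cost(total_platform_cost: float, units: float) -> float:
    return total_platform_cost / units

total_cost = 1_000_000.0
reserved, built = 4_000, 10_000  # e.g. vCPUs reserved vs. vCPUs built

print(unit_cost(total_cost, reserved))  # 250.0 per reserved unit
print(unit_cost(total_cost, built))     # 100.0 per built unit
# The gap between the two rates is the carrying cost of excess capacity.
```

Allocating by reservations makes each application's bill sensitive to over-reserving; allocating by built capacity surfaces how much of the platform the organization paid for but is not using.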


4 steps to supercharge IT leaders

IT leaders understand well that it’s not just willpower that leads to success; it’s also “way power,” or the “how” of achieving your goals. Being able to find a pathway forward is just the starting point. Leaders today must be able to pivot in real time when a new opportunity, obstacle, or crisis surfaces. The more options you have, the more likely you are to achieve your goals. Consider the example of a complex digital initiative. Let’s say you’ve identified a way forward plus a backup plan for your top priorities. Implementation will almost certainly be nonlinear. What happens when you hit obstacles? ... Many things cloud or limit our vision or even blindside us. We are all affected by our personalities, beliefs, and assumptions. We are also affected by the data that we choose to rely on. How can you be more confident that what you’re seeing is real? Start by taking another look at your external priorities. You might be exaggerating or discounting the opportunities or threats that you see. ... How confident are you about timing for essential deliverables on your critical path?



Quote for the day:

"The speed of the leader is the speed of the gang." -- Mary Kay Ash

Daily Tech Digest - February 25, 2023

Beyond Chess and the Art of Enterprise Architecture

In EA orthodoxy, the common future state goal is also seen as contributing to the coherence of design choices. As this is clearly not enough, architecture principles have been espoused as a way to make future design choices ‘good’. Architecture principles are like simple tactical ‘gameplay rules’ in chess. My favourite example of a chess gameplay rule is that one can give each piece a weight (queen is 9, rook is 5, bishop and knight are 3, and pawn is 1), and that when exchanging material you should take the exchange when you win more points than you lose. That is the chess equivalent of an architecture principle: it defines ‘the choice you should make’. Following such a rule blindly is a sure way to lose a game of chess, though. Simply focusing on getting more material will not make you win; you might not even end up with more material. But what is true is that winners do in general end up with more pieces than losers. Tactical gameplay rules of chess are thus descriptive, not prescriptive.
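The gameplay rule can be written as a tiny scoring function, which also makes its limits obvious (a sketch using the standard piece values the author cites):

```python
# Standard material values from the article (the rook is the piece the
# author calls a "castle").
PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def exchange_score(pieces_won, pieces_lost):
    """Net material from a proposed exchange; the 'principle' says
    take the exchange if the score is positive."""
    return (sum(PIECE_VALUE[p] for p in pieces_won)
            - sum(PIECE_VALUE[p] for p in pieces_lost))

# Winning a rook for a knight looks good by the rule...
print(exchange_score(["rook"], ["knight"]))  # 2
# ...but the rule is descriptive, not prescriptive: a "winning" trade can
# still lose the game if it ignores position, tempo, or king safety --
# exactly the gap between following an architecture principle and
# making a good architecture.
```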


Why Are My Employees Integrating With So Many Unsanctioned SaaS Apps?

It's in the interest of the vendors to get users hooked quickly on any cool, new functionality by removing all friction to adoption, including bypassing IT and security team reviews in the process. The hope is that even if security teams grow wise to the use of an application, it will prove too popular with business users and too critical to business operations to remove it. However, making adoption overly easy can also lead to a proliferation of unused, abandoned, and exposed apps. Once an app is rejected during a proof of concept (PoC), is abandoned due to waning interest, or the app owner leaves the organization, it can often remain active, providing an expanded and unguarded attack surface that places the organization and data at elevated risk. While it's important to educate your business users on SaaS security best practices, it's even more important to fight indiscriminate SaaS sprawl by teaching them to evaluate more critically the siren song of SaaS vendors about easy deployment and financial incentives.


Meta’s New Feature will Make You Rethink Your Online Presence

Digital identity is touted to be the most essential component in how we will operate in the social media world, or the metaverse, in years to come. “Subscription identity verification could be even more important in the metaverse where it is critical to know that the person you are interacting with is precisely who they claim to be,” Louis Rosenberg told AIM. During the 2023 World Economic Forum (WEF), the argument in favour of the proposition was overwhelmingly affirmative, and it went as follows: “To achieve this frictionless state, good system-wide interoperability of the metaverse should consider interests such as privacy, security and safety. Given the borderless nature of the metaverse, multistakeholder and multilateral collaboration will be required to reach consensus on design choices, best practices, standards and management activities.” The multiple stakeholders mentioned above include not only creators and users, but also governments, businesses and civil society.


As Microsoft embraces AI, it says sayonara to the metaverse

First, if you’ve got big metaverse plans involving Microsoft-related technologies, it’s time to re-examine what you’re doing. Microsoft will support its metaverse only around the edges, meaning to a certain extent, you’re on your own. Take that into account in deciding your next steps. It also means you should be careful about buying too much or too quickly into Microsoft’s AI promises. Yes, the company appears to be going all in on the technology, with billions of dollars in investments and high-profile pronouncements about its use in Bing. It claims AI will essentially be built into everything it does from now on. There’s no doubt AI will affect much of what the company does in the years ahead. But remember, it was just two years ago that Nadella could not “overstate how much of a breakthrough” the metaverse was. And now Microsoft has all but abandoned it. Be cautious when it comes to AI. You need to invest in it, but don’t go all in until it’s clear Microsoft will itself be all in for many years to come.


Don’t let buzzwords drive your cloud architecture

Why am I bringing it up now if it’s a known and long-time problem? I see BOA drive most cloud architectures these days, with enterprises paying the price for expensive mistakes. As I covered here, these architectures “work,” but because they function at a much lower level of cost efficiency, enterprises typically spend two to three times more than a better-optimized solution. Just look at the misapplications of containers to see many examples of this expensive problem. ... Cloud computing didn’t fail the business; those who created the cloud solutions failed the business. Instead of finding the most cost-effective and optimized cloud architecture, they took a BOA approach that started with answers before there was a clear understanding of the questions. I’ve created many IT architecture concepts and many buzzwords in my career. The danger comes when those concepts and buzzwords are misapplied and we mistakenly blame the concept, not the person who misapplied it.


How Can Quantum Entanglement Be Used For Secure Communication?

There is little doubt that quantum technology will have a huge impact on cybersecurity and cryptography. Already, we can see government agencies across the globe preparing enterprises for “Q-Day”, a time when quantum computers will be able to use Shor’s algorithm to break all public key systems that rely on integer factorization-based cryptography. As a procedure, quantum communication entails encoding information in quantum states, called qubits, rather than in the classical binary tradition of “zeros and ones”, taking advantage of the special properties of these quantum states to guarantee security. Most commonly, photons are used for this. Quantum Key Distribution (QKD) offers provable, future-proof security in theory, yet requires trusted nodes over long distances. Presently, a single QKD link is limited to a few hundred kilometers, with a sweet spot in the 20–50 km range. Work on quantum repeaters and satellite QKD is ongoing to extend the range.
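The sifting step of BB84 (the canonical QKD protocol) can be sketched with a toy simulation; this models an ideal, eavesdropper-free channel and is purely illustrative:

```python
import random

# Toy BB84 sketch: Alice encodes random bits in random bases; Bob measures
# in random bases; they publicly compare bases and keep only the positions
# where the bases agree (the "sifted key").
rng = random.Random(7)  # seeded so the example is reproducible
n = 16
alice_bits  = [rng.randint(0, 1) for _ in range(n)]
alice_bases = [rng.choice("XZ") for _ in range(n)]
bob_bases   = [rng.choice("XZ") for _ in range(n)]

# With no eavesdropper, Bob's measurement matches Alice's bit whenever the
# bases agree; a mismatched basis yields a random result that gets discarded.
bob_results = [b if ab == bb else rng.randint(0, 1)
               for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                if ab == bb]
sifted_bob   = [r for r, ab, bb in zip(bob_results, alice_bases, bob_bases)
                if ab == bb]
assert sifted_alice == sifted_bob  # shared secret key material, never transmitted
```

The security argument, omitted here, is that an eavesdropper measuring in the wrong basis disturbs the qubits and shows up as errors in the sifted key.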


Data governance holds the key to integrity and security

Data governance provides the principles, policies, processes, framework, tools, and metrics required to manage data effectively at all levels, from creation to consumption. When implemented, data governance establishes a pathway to integrity through effective quality management and compliance with all applicable privacy and security standards. Through such features, data is made trustworthy, which is key in any IT transformation. ... In an age of increasing complexity in all aspects of data technology and architecture, federal agencies and military IT departments need to simplify their approach to data governance to succeed. The ideas discussed here can simplify and significantly improve any data governance initiative. By leveraging data virtualization technology, agencies can take advantage of these ideas and transform their approaches to data management and data governance to achieve more efficient operations and secure data access.


For Effective Governance, Start with Why

It’s critical to take a collaborative approach when creating program standards, so enlisting support from colleagues in departments like legal, HR, cybersecurity, risk management, and safety will aid in buy-in and deconfliction, and you might even be able to borrow similar style, terminology, and processes from them. After all, few of us can claim our standards, procedures, and governance products are truly proprietary. They are usually blends of other products from various organizations, adapted to best fit our industry, organization, location, and culture. Consult codified standards from organizations like ISO, ANSI, NIST, and ASIS. Benchmark drafts with peer organizations. ... Further, you will likely be sharing risk with various departments, so it’s critical to understand existing jurisdictions. Finally, to carry any weight, this document should be reviewed and approved by senior leadership. There will be challenges to your authority, and you will need support to defend against them.


These Experts Are Racing To Protect AI From Hackers

Concerns about attacks on AI are far from new, but there is now a growing understanding of how deep-learning algorithms can be tricked by making slight -- but imperceptible -- changes, leading to a misclassification of what the algorithm is examining. "Think of the AI system as a box that takes an input and then outputs some decision or some information," says Desmond Higham, professor of numerical analysis at University of Edinburgh's School of Mathematics. "The aim of the attack is to make a small change to the input, which causes a big change to the output." ... This recognition process isn't an error; it happened because humans specifically tampered with the image to fool the algorithm -- a tactic that is known as an adversarial attack. "This isn't just a random perturbation; this imperceptible change wasn't chosen at random. It's been chosen incredibly carefully, in a way that causes the worst possible outcome," warns Higham.
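Higham's "small change in, big change out" effect can be illustrated on a toy linear classifier with a sign-of-gradient (FGSM-style) perturbation -- an assumed example for illustration, not one from the article:

```python
import numpy as np

# Toy linear classifier: class = sign(w . x). The attack nudges every
# feature by a tiny amount in the direction that most reduces the score.
w = np.array([2.0, -3.0, 1.0])     # model weights (illustrative)
x = np.array([0.5, 0.2, -0.1])     # clean input

eps = 0.1                          # imperceptibly small step per feature
x_adv = x - eps * np.sign(w)       # carefully chosen, not random, perturbation

clean_score = float(w @ x)         # 0.3  -> classified as +1
adv_score = float(w @ x_adv)       # -0.3 -> flipped to -1
print(clean_score, adv_score)
```

Each feature moved by only 0.1, yet the classification flipped -- the perturbation was chosen in precisely the worst direction, which is what distinguishes an adversarial attack from random noise.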


Security with ChatGPT: What Happens When AI Meets Your API?

Right now, the brightest minds in cybersecurity are envisioning how to employ better AI and machine learning (ML) security -- such as CyberArk researchers, who recently discovered how easily ChatGPT can be tricked into creating polymorphic malware. At the heart of AI’s cybersecurity concerns is the proliferation of APIs (application programming interfaces). While developers are working to simplify and accelerate architecting, configuring and building serverless applications with the help of new AI systems like DeepMind’s AlphaCode, problems arise as ML becomes responsible for generating and executing code. ... So far, with OpenAI’s ChatGPT, the results look good and may even work well. That said, many results aren’t perfect and could incorporate flaws that aren’t evident upon initial review. Whether the coder is an AI system or a human, organizations still need a strong approach to application security that will catch vulnerabilities in code and provide suggestions on how to remediate them.



Quote for the day:

"Give away what you most wish to receive." -- Robin Sharma

Daily Tech Digest - February 23, 2023

Trends in Data Governance in 2023: Maturation Toward a Service Model

Organizations will increasingly adopt a Data Governance service model as they increase implementations of AI technologies. The EU and U.S. plan to impose new regulations to protect consumers and impact how algorithms can ingest, use, transform, and make recommendations based on datasets. Companies have a short time to ramp up their Data Governance responses to AI because many algorithms adjust inputs and outputs in real time. Organizations need more Data Governance preparation, as only 30% of respondents in a McKinsey AI study recognized potential legal risks as relevant. Firms blinded to the importance of AI regulations will face increased pressure to adapt their Data Governance approaches by the end of 2023. The EU’s draft AI regulations promise to impose considerably larger fines on companies that fail to comply -- 6% of global revenue, instead of the 4% levied under the GDPR. Consequently, worker adoption of Data Governance updates, in preparation for AI regulations, with their engagement and feedback, will play a crucial role in 2023.


Sci-fi magazine halts new submissions after a surge in AI-written stories

Clarke acknowledged there are tools available for detecting plagiarized and machine-written text, but noted they are prone to false negatives and positives. OpenAI recently released a free classifier tool to detect AI-generated text, but also noted it was "imperfect" and it was still not known whether it was actually useful. The classifier correctly identifies 26% of AI-written text as "likely AI-written" -- its true positive rate. It incorrectly identifies human-written text as AI-written 9% of the time -- its false positive rate. Clarke outlines a number of approaches that publishers could take besides implementing third-party detection tools, which he thinks most short fiction markets can't currently afford. Other techniques could include blocking submissions over a VPN or blocking submissions from regions associated with a higher percentage of fraudulent submissions. "It's not just going to go away on its own and I don't have a solution. I'm tinkering with some, but this isn't a game of whack-a-mole that anyone can 'win.' The best we can hope for is to bail enough water to stay afloat," wrote Clarke.
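How much a "likely AI-written" flag should be trusted also depends on the share of submissions that are actually AI-written -- an assumed number in the sketch below, not one from the article. A quick Bayes calculation using the reported rates:

```python
# Reported rates: the classifier flags 26% of AI text (true positive rate)
# and wrongly flags 9% of human text (false positive rate).
def flag_precision(tpr, fpr, ai_share):
    """P(text is AI-written | classifier says 'likely AI-written')."""
    flagged_ai = tpr * ai_share
    flagged_human = fpr * (1 - ai_share)
    return flagged_ai / (flagged_ai + flagged_human)

# If half of all submissions were AI-written, a flag is right ~74% of the time:
print(round(flag_precision(0.26, 0.09, 0.50), 3))  # 0.743
# If only 10% were AI-written, most flags would hit human authors:
print(round(flag_precision(0.26, 0.09, 0.10), 3))  # 0.243
```

This is why Clarke's caution about false positives matters: at low base rates, even a modest false positive rate means most flagged authors are human.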


Pairing AI with Tech Pros: A Roadmap to Successful Implementation

“The technology can also automatically check the quality and interpret data where metadata is not available, interpret tabular data and summarize them with natural text and jointly interpret image, text, and tabular data,” he says. Krishna cautions that while generative AI has exciting potential, the recent focus on the technology has also reinforced the importance of responsible AI. “Going forward, organizations will be using AI methodologies to make decisions for their customers, employees, vendors and everyone associated with them,” he says. “A responsibility charter needs to be sponsored by C-suite leaders and developed through dynamic and consistent discussions led by the leaders in compliance, risk and data analytics.” Lo Giudice adds it is important for organizational leaders and IT workers, for example software developers, to come together and decide which AI-based tools could be deployed and the strategy behind that deployment. “Developers are influencers of this, because if they get excited about it, it will win,” he says. 


Platforming the Developer Experience

With intuitive, self-service workflows and all the tools developers need, they rarely, if ever, have to think about ‘the how’ of getting their software into the hands of users. And this works if and when an organization does at least a couple of things right: The organization prioritizes the developer experience and empowers other parts of the organization to answer the question, How can we create the optimal developer experience? The organization puts resources behind understanding and building the best developer experience–and that’s where both the developer platform and DevOps teams as “fixers” ideas emerge. Does this mean the “optimal experience” can’t be optimized? Does that mean developers cannot have input into their own (or more general) developer experience(s)? No. In fact, part of what makes the developer platform idea compelling is that developers don’t have to weigh in or make decisions on the platform or tooling. Still, it’s possible to let them have that freedom if the team or organization wants to. Bottom line: There is no one-size-fits-all developer platform any more than there is a single developer experience. 


How IT professionals can change careers to cyber security

While most IT professionals will have these skills on a basic level, many will only understand them as needed for their own day to day work, Teale says. Therefore, additional training is sometimes necessary. Many IT professionals may not need to fork out for a cyber security degree although certifications might be a helpful way forward. Basic foundational books and courses can offer some guidance, and an apprenticeship or course from a certified body might make sense for IT professionals who are looking to switch early in their careers, Finch says. ... There are a number of entry level courses available, such as CISMP or CompTIA, says Freha Arshad, managing director, Accenture Security in the UK. “All of the major cloud service providers offer security courses for varied levels and skill sets. With enterprises increasingly focused on the cloud, this area is also a good place to start.” In addition, says McQuade, there are free resources online to support self-learning: “HackXpert and TryHackMe provide training labs, while Cybrary offers a library of helpful videos, labs and training exams. ...”


CISOs struggle with stress and limited resources

The lack of bandwidth and resources is not only impacting CISOs, but their teams as well. ... Relentless stress levels are also affecting recruitment efforts with 83% of CISOs admitting they have had to compromise on the staff they hire to fill gaps left by employees who have quit their job. More than a third of the CISOs surveyed said they are either actively looking for or considering a new role. “The results from our mental health survey are devastating but it’s not all doom and gloom. Our research found that CISOs know exactly what they need to reduce stress levels: more automated tools to manage repetitive tasks, better training, and the ability to outsource some work responsibilities,” said Eyal Gruner, CEO, Cynet. “One of the most eye-opening insights from the report was the fact that more than 50% of the CISOs we surveyed said consolidating multiple security technologies on a single platform would decrease their work-related stress levels,” Gruner added.


Making Risk Management for Agile Projects Effective

Agile claims to be risk-driven, and through its implicit practices it lends itself to an adaptive risk management style. For instance, the adaptability of sprint planning is a response to uncertainty, “biting off a small chunk at a time” to eventually deliver the finished solution. Due to its inherent nature, Agile can mitigate some risk that occurs during the sprint cycle, but this is not the only risk that may occur during a project’s lifespan. For example, in larger enterprises, there is more risk related to the external, organizational and project environments, including corporate reputation, project financing, user adoption of business changes and regulatory compliance. Management of this type of “project” risk is not addressed in most Agile literature, which focuses on risk that may occur at the sprint level. One recent proposal to address this limitation is to adopt an Agile risk management process that includes tailoring Agile methodologies to include project and enterprise risk management approaches in line with the risk context for the project.


Robotic Process Automation: Confluence of Automation and AI

According to Deloitte, it can lead to improved service, fewer mistakes, increased auditability, increased productivity, and lower costs. It makes it possible to have a workforce that is automated in a variety of ways around the clock. More sophisticated tools are taking the place of the outdated methods that relied on Excel sheets and macros. Additionally, functions like dashboarding, workflow, and proactive system and process monitoring are becoming increasingly important components of technology infrastructures thanks to these new tools. These “new” tools also frequently need to interact with older systems, which is not always possible. Extracting, formatting, shaping, and distributing the data in a way that a downstream system can consume necessitates human interaction. This process is being automated with RPA in a more controlled, efficient, and less labor-intensive manner. RPA bots can, for the sake of simplicity, completely automate human actions like opening files, entering data, and copy-pasting fields.
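A minimal sketch of the kind of extract-reshape-load step such a bot replaces (the file layout and field names are invented for illustration):

```python
import csv
import io

# A legacy system's export: semicolon-delimited, with its own column names.
legacy_export = io.StringIO(
    "Cust No;Name;Amt Due\n"
    "1001;Acme Corp;250.00\n"
    "1002;Globex;99.50\n"
)

# Extract and reshape: keep and rename only the fields the downstream
# system expects -- the copy-paste work a human would otherwise do.
reader = csv.DictReader(legacy_export, delimiter=";")
rows = [{"customer_id": r["Cust No"], "amount_due": r["Amt Due"]}
        for r in reader]

# Load: write a clean file in the downstream system's format.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["customer_id", "amount_due"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Real RPA platforms wrap this same pattern in recording, scheduling, and monitoring tooling; the underlying work is still extract, reshape, deliver.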


The Future of Network Security: Predictive Analytics and ML-Driven Solutions

ML-driven network security solutions in cybersecurity refer to the use of self-learning algorithms and other predictive technologies (statistics, time-series analysis, correlations, etc.) to automate various aspects of threat detection. The use of ML algorithms is becoming increasingly popular for scalable technologies due to the limitations of traditional rule-based security solutions. Data is processed through advanced algorithms that can identify patterns, anomalies, and other subtle indicators of malicious activity, including new and evolving threats that may not have known bad indicators or existing signatures. Detecting known threat indicators and blocking established attack patterns is still a crucial part of overall cyber hygiene. However, traditional approaches using threat feeds and static rules can become time-consuming to maintain across all the different log sources. In addition, Indicators of Attack (IoA) or Indicators of Compromise (IoC) may not be available at the time of an attack, or are quickly outdated.
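The statistical side of signature-free detection can be sketched in a few lines: flag a log source whose event count strays far from its learned baseline, with no threat-feed indicator required (numbers are illustrative):

```python
from statistics import mean, stdev

# Learned baseline: normal hourly counts of failed logins for one source
# (illustrative data).
baseline = [102, 98, 110, 95, 105, 99, 101, 97]

def is_anomalous(count, history, z=3.0):
    """Flag a count more than z standard deviations from the baseline mean --
    no signature or known-bad indicator needed."""
    mu, sigma = mean(history), stdev(history)
    return abs(count - mu) / sigma > z

print(is_anomalous(104, baseline))   # a typical hour -> False
print(is_anomalous(480, baseline))   # a credential-stuffing burst -> True
```

Production systems layer many such detectors (plus learned models) over many log sources, but the principle is the same: model "normal" and alert on deviation, which is exactly what static IoC lists cannot do for novel attacks.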


1 in 4 CISOs Wants to Say Sayonara to Security

CISOs aren't necessarily running down alerts constantly the way their employees are, but they're overloaded with other career fatigue factors. "CISOs are constantly trying to balance high expectations against an absence of the tools needed to meet those expectations," Gartner analysts wrote in the prediction piece. "Compliance-centric cybersecurity programs, significantly low executive support, and subpar industry-level maturity are all indicators of an organization that does not view security risk management as critical to business success." One of the big factors that could have CISOs reconsidering their career trajectory in cybersecurity altogether is the fear about what will happen to their professional reputation if their company gets breached, says Diana Kelley, a veteran cybersecurity executive and co-founder and CSO of Cybrize, a cybersecurity workforce planning platform. She says CISOs and CSOs worry about "having their name dragged through the mud" after a breach, or even facing criminal charges, which feels more possible in the fallout from the conviction of Uber's Joe Sullivan last year.



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper

Daily Tech Digest - February 20, 2023

How quantum computing threatens internet security

“Basically, the problem with our current security paradigm is that it relies on encrypted information and decryption keys that are sent over a network from sender to receiver. Regardless of the way the messages are encrypted, in theory, someone can intercept and use the keys to decrypt apparently secure messages. Quantum computers simply make this process faster,” Tanaka explains. “If we dispense with this key-sharing idea and instead find a way to use unpredictable random numbers to encrypt information, the system might be immune. [Muons] are capable of generating truly unpredictable numbers.” The proposed system is based on the fact that the speed of arrival of these subatomic particles is always random. This would be the key to encrypt and decrypt the message, if there is a synchronized sender and receiver. In this way, the sending of keys would be avoided, according to the Japanese team. However, muon detection devices are large, complex and power-hungry, limitations that Tanaka believes the technology could ultimately overcome.
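The key-free idea Tanaka describes can be sketched as a one-time pad; here Python's `secrets` module stands in for the synchronized, truly random muon-arrival timings that both sides would observe locally:

```python
import secrets

# If sender and receiver independently derive the same unpredictable random
# bytes (the role muon timings would play), a message can be XOR-encrypted
# as a one-time pad with no key ever sent over the network.
message = b"meet at dawn"
pad = secrets.token_bytes(len(message))   # stand-in for shared muon randomness

ciphertext = bytes(m ^ p for m, p in zip(message, pad))   # sender side
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))  # receiver side
assert recovered == message
```

The one-time pad is information-theoretically secure precisely when the pad is truly random, never reused, and known only to the two parties -- which is why an unpredictable physical source, rather than a transmitted key, is the crux of the proposal.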


Considering Entrepreneurship After a Successful Corporate Career? Here Are 3 Things You Need to Know

Many of you may be concerned that a transition could alienate your audience and force you to wait before making a move. But this is a common misconception rooted in the idea that your personal brand reflects what you do professionally. At Brand of a Leader, we help our clients shift their thinking by showing them that their personal brand is who they are, not what they do. The goal of personal brand discovery is to understand your essence and package it in a way that appeals to others. Your vocation is only one of your key talking points, and when you pivot, you simply shift those points while maintaining the essence of your brand. So, when should you start building your personal brand? The answer is simple: the sooner, the better. Building a brand takes time — time to build an audience, create visibility and establish associations between your name and consistent perceptions in people's minds. Starting sooner means you'll start seeing results faster.


Establish secure routes and TLS termination with wildcard certificates

By default, the Red Hat OpenShift Container Platform uses the Ingress Operator to create an internal certificate authority (CA) and issue a wildcard certificate valid for applications under the .apps subdomain. The web console and the command-line interface (CLI) use this certificate. You can replace the default wildcard certificate with one issued by a public CA included in the CA bundle provided by the container userspace. This approach allows external clients to connect to applications running under the .apps subdomain securely. You can replace the default ingress certificate for all applications under the .apps subdomain. After replacing the certificate, all applications, including the web console and CLI, will be encrypted using the specified certificate. One clear benefit of using a wildcard certificate is that it minimizes the effort of managing and securing multiple subdomains. However, this convenience comes at the cost of sharing the same private key across all managed subdomains.
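Assuming a recent OpenShift release, the documented replacement flow looks roughly like this (secret and file names are placeholders; verify the exact steps against your cluster's documentation before running them):

```shell
# 1. Store the publicly signed wildcard certificate and key as a TLS secret
#    in the namespace the Ingress Operator reads from.
oc create secret tls custom-certs-default \
  --cert=wildcard.apps.example.com.crt \
  --key=wildcard.apps.example.com.key \
  -n openshift-ingress

# 2. Point the default IngressController at the new secret; the operator then
#    rolls the certificate out to everything under *.apps, including the
#    web console and CLI endpoints.
oc patch ingresscontroller.operator default --type=merge \
  -p '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' \
  -n openshift-ingress-operator
```

Because one secret now backs every `.apps` route, rotating it is a single operation -- the convenience (and the shared-private-key trade-off) the excerpt describes.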


Overcoming a cyber “gut punch”: An interview with Jamil Farshchi

Your biggest enemies in a breach are time and perfection. Everyone wants everything done in a split second. And having perfect information to construct perfect solutions and make perfect decisions is impossible. Time and perfection will ultimately crush you. By contrast, your two greatest allies are communication and optionality. Communication is being able to lay out the story of where things are, and to make sure everyone is rowing in the same direction. It’s being able to communicate the current status, and your plans, to regulators—and at the same time being able to reassure your customers and make sure they have confidence that you’re going to be able to navigate to the other side. Optionality is critical, because no one makes perfect decisions in this kind of firefight. Unless you’re comfortable making decisions that might not be right at any given point in time, you’re going to fail. [As a leader,] you need to frame up a program and the decisions you’re making in such a way that you’re comfortable rolling them back or tailoring them as you learn more, and as things progress.


7 reasons to avoid investing in cyber insurance

Two things organizations might want to consider right off the bat when contemplating an insurance policy are the cost to and benefit for the business, SecAlliance Director of Intelligence Mick Reynolds tells CSO. “When looking at cost, the recent spate of ransomware attacks globally has seen massive increases in premiums for firms wishing to include coverage of such events. Renewal quotes have, in some cases, increased from around £100,000 ($120,000) to over £1.5 million ($1.8 million). Such massive increases in premiums, for no perceived increase in coverage, are starting now to be challenged by board risk committees as to the overall value they provide, with some now deciding that accepting exposure to major cyber events such as ransomware is preferable to the cost of the associated policy.” As for benefits to the business, insurance is primarily taken out to cover losses incurred during a major cyber event, and 99% of the time these losses are quantifiable and relate predominantly to response and recovery costs, Reynolds says.


The importance of plugging insurance cyber response gaps

The insurance industry is a lucrative target as organisations hold large amounts of private and sensitive information about their policy holders who, rightfully so, have the expectation of their data being kept safe and secure. This makes it no surprise that the industry is a key target for cyber criminals due to the massive disruption it can cause and the potential high financial reward on offer. Research shows that 82 per cent of the largest insurance carriers were the focus of ransomware attacks in 2022. It is expected that the insurance industry will only become a more favourable target, and these types of disruptions will become increasingly severe. The insurance industry is one that has embraced innovation and new forms of technology in its practices over recent years in order to offer their customers a seamless experience. In doing so, alongside the onset of remote working catalysed by the pandemic, they have increased their threat surface. ... These are just the tip of the iceberg, so when cyber criminals look to exploit data, the insurance industry is a primary target due to its huge customer base.


Value Chain Analysis: Best Practices for Improvements

To stay competitive, organizations must ensure that they have picked the right partners for each of the functions in the value chain, and that appropriate value is captured by each participant. “In addition to ensuring each participant’s value and usefulness in the chain, value chain analysis enables organizations to periodically verify that functions are still necessary, and that value is being delivered efficiently without undue waste such as administrative burden, communications costs or transit or other ancillary functions,” he says. Business leaders and IT leaders like the chief information officer and chief data officer must prove that they are benefiting the bottom line. While it is time consuming, value chain analysis is a key method to examine company value -- an essential practice during times of high stakes and economic uncertainty. Jon Aniano, senior vice president at Zendesk, adds that running a full VCA requires analyzing and tracking a massive amount of data across your entire company.


Cybersecurity takes a leap forward with AI tools and techniques

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.” DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost. It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, like the takeaway of a digital device.
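The reward/cost loop Chatterjee describes can be shown with a tabular toy -- far simpler than the deep networks used for real cyber defense, but the same learning rule (the environment here is an invented 5-state "incident corridor"):

```python
import random

# Tabular Q-learning toy: an agent moves through a 5-step corridor toward a
# "threat contained" terminal state. Reaching it earns +10 (positive reward);
# every other step costs -1 (negative cost), so the agent learns the
# shortest response path.
rng = random.Random(0)
n_states, actions = 5, [-1, +1]           # step back / step forward
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2         # learning rate, discount, exploration

for _ in range(500):                      # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 10.0 if s2 == n_states - 1 else -1.0
        # Reinforce good decisions, discourage bad ones (the DRL update rule).
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy steps forward in every non-terminal state.
policy = {s: max(actions, key=lambda act: Q[(s, act)])
          for s in range(n_states - 1)}
print(policy)
```

Deep reinforcement learning replaces the Q table with a neural network so the same loop scales to the enormous state and action spaces of a live network.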


9 ways ChatGPT will help CIOs

“ChatGPT is very powerful out of the box, so it doesn’t require extensive training or teaching to get up to speed and handle specific business processes. A valuable initial business application for ChatGPT is routine tasks, such as filling out a contract. It can effectively review the document and fill in the necessary fields using the data and context provided by the organization. With that said, ChatGPT has the potential to shoulder administrative burdens for CIOs quickly, but it’s important to regularly measure the accuracy of its work, especially if an organization plans to use it regularly. The best way for CIOs to get started with ChatGPT is to take the time to grasp how it would work within the context of their organization before rushing to widespread adoption. At these early stages of the technology, it’s better to let it complement existing workflows under close supervision instead of restructuring around it as an end-to-end solution.”
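The advice to "regularly measure the accuracy of its work" can be made concrete with a small spot-check harness. This is a hedged sketch: the field names and contract values below are hypothetical, and in practice the model-filled dictionary would come from whatever LLM integration the organization uses.

```python
def field_accuracy(model_filled: dict, human_reviewed: dict) -> float:
    """Fraction of contract fields where the model's answer matches
    a human reviewer's gold answer (case- and whitespace-insensitive)."""
    matches = sum(
        1 for field, expected in human_reviewed.items()
        if model_filled.get(field, "").strip().lower() == expected.strip().lower()
    )
    return matches / len(human_reviewed)

# Hypothetical spot check on a single contract:
model_output = {"party_name": "Acme Corp", "term_months": "12",
                "governing_law": "Delaware"}
reviewed = {"party_name": "Acme Corp", "term_months": "12",
            "governing_law": "New York"}

score = field_accuracy(model_output, reviewed)  # 2 of 3 fields match
```

Running checks like this on a rolling sample of documents gives the "close supervision" the passage recommends a measurable baseline, and flags fields (here, the governing-law clause) that need human review.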


Art Of Knowledge Crunching In Domain Driven Design

Miscommunication during knowledge crunching sessions can have various causes, such as cognitive bias: a type of error in reasoning, decision-making, and perception that arises from the way our brains perceive and process information. This kind of bias occurs when an individual’s cognitive processes lead them to form inaccurate conclusions or make irrational decisions. For example, when betting at a roulette table, if previous outcomes have landed on red, we might mistakenly assume that the next outcome will be black; however, these events are independent of each other (i.e., the outcome of one spin does not affect the probability of the next). Another cause is apophenia, the tendency to perceive meaningful connections between unrelated things, as in conspiracy theories or the moment we believe we understand something when in fact we do not. A good example is an image sent from Mars showing a shape on a rock that you might think is the face of an alien, when it is just a random rock formation.
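The roulette example (the gambler's fallacy) can be checked with a quick simulation. This is an illustrative sketch assuming a fair European wheel, where 18 of 37 pockets are red: the frequency of red after a streak of three reds is the same as the overall frequency, because the spins are independent.

```python
import random

def simulate(spins=200_000, seed=42):
    """Compare P(red) overall with P(red) right after three reds in a row."""
    rng = random.Random(seed)
    # True = red; 18 of the 37 pockets on a European wheel are red.
    outcomes = [rng.randrange(37) < 18 for _ in range(spins)]
    after_streak = [
        outcomes[i] for i in range(3, spins)
        if outcomes[i - 3] and outcomes[i - 2] and outcomes[i - 1]
    ]
    return sum(outcomes) / spins, sum(after_streak) / len(after_streak)

overall, post_streak = simulate()
# Both frequencies hover near 18/37 ≈ 0.486: a run of reds does not
# make black "due" on the next spin.
```

The same independence argument applies to any bias of this form: the only way to dislodge it in a knowledge crunching session is to make the underlying probability model explicit, as this sketch does.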



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard