Daily Tech Digest - June 17, 2022

Revisit Your Password Policies to Retain PCI Compliance

PCI version 4.0 requires multifactor authentication to be more widely used. Whereas multifactor authentication had previously been required only for administrators who needed to access systems related to cardholder data or processing, the new requirement mandates that multifactor authentication be used for any account that has access to cardholder data. The new standards also require users’ passwords to be changed every 12 months. Additionally, passwords must be changed any time an account is suspected of having been compromised. A third requirement is that PCI requires users to use strong passwords. While strong passwords have always been required by the PCI standard, the password requirements are more stringent than before. Passwords must now be at least 15 characters in length, and they must include both numeric and alphabetic characters. Additionally, users’ passwords must be compared against a list of passwords that are known to be compromised. Another requirement of PCI 4.0 is that organizations must review access privileges every six months to make sure that only those who specifically require access to cardholder data are able to access that data.
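To make the policy concrete, here is a minimal sketch of how such checks might be automated. It is illustrative only: the length threshold follows the excerpt above, and the compromised-password list is a placeholder for the breach-corpus service a real implementation would query.

```python
import re

# Placeholder for the known-compromised password list a real system would
# pull from a breach-corpus service rather than hardcode.
KNOWN_COMPROMISED = {"password123456789", "correcthorsebatterystaple"}

MIN_LENGTH = 15  # minimum length described in the excerpt above

def check_password(candidate: str) -> list[str]:
    """Return the reasons a candidate password fails the policy (empty list = OK)."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[0-9]", candidate):
        problems.append("contains no numeric characters")
    if not re.search(r"[A-Za-z]", candidate):
        problems.append("contains no alphabetic characters")
    if candidate.lower() in KNOWN_COMPROMISED:
        problems.append("appears on a known-compromised password list")
    return problems

print(check_password("Summer2022"))  # too short, so the policy rejects it
```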


Making the world a safer place with Microsoft Defender for individuals

Today’s sophisticated cyber threats require a modern approach to security. And this doesn’t apply only to enterprises or government entities—in recent years we’ve seen attacks increase exponentially against individuals. There are 921 password attacks every second. We’ve seen ransomware threats extending beyond their usual targets to go after small businesses and families. And we know, as bad actors become more and more sophisticated, we need to increase our personal defenses as well. That is why it is so important for us to protect your entire digital life, whether you are at home or work—threats don’t end when you walk out of the office or close your work laptop for the day. We need solutions that help keep you and your family secure in how you work, play, and live. That’s why I’m excited to share the availability of Microsoft Defender for individuals, a new online security application for Microsoft 365 Personal and Family subscribers. We believe every person and family should feel safe online. This is an exciting step in our journey to bring security to all and I’m thrilled to share with you more about this new app, available with features for you to try today.


Data Is Vulnerable to Quantum Computers That Don’t Exist Yet

To stay ahead of quantum computers, scientists around the world have spent the past two decades designing post-quantum cryptography (PQC) algorithms. These are based on new mathematical problems that both quantum and classical computers find difficult to solve. In January, the White House issued a memorandum on transitioning to quantum-resistant cryptography, underscoring that preparations for this transition should begin as soon as possible. However, after organizations such as the National Institute of Standards and Technology (NIST) help decide which PQC algorithms should become the new standards for the world to adopt, there are billions of old and new devices that will need to be updated. Sandbox AQ notes that such efforts could take decades to complete. Although quantum computers are currently in their infancy, there are already attacks that steal encrypted data with the intention of cracking it once codebreaking quantum computers become a reality. Therefore, Sandbox AQ argues that governments, businesses, and other major organizations must begin the shift toward PQC now.


Developer, Beware: The 3 API Security Risks You Can’t Overlook

By design, the majority of APIs send data from the data store to the client. Excessive data exposure results when an API returns more data to the client than the use case requires. Attackers can collect or harvest sensitive data from such API responses. For example, a group fitness app displays the home location of the group’s participants. The locations are displayed on a map using the latitude and longitude of each athlete. A well-designed API returns only the latitude and longitude of each athlete. Conversely, a poorly designed API returns full user information about each athlete, including their full name, address, email, phone number, latitude and longitude, and more. This is excessive data exposure: the API returns more data than it was intended to. It often occurs when a poorly designed API pulls a record from the database and returns it to the client in its entirety, exposing every field in the record. In this situation, the true business use case was not fully understood during development.
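A minimal sketch of the difference, assuming a hypothetical athlete record (the field names are made up for illustration): the poorly designed handler serializes the whole record, while the better one shapes the response to the map's actual needs.

```python
# Hypothetical athlete record as it might come back from the data store.
athlete_record = {
    "id": 42,
    "full_name": "Jane Doe",
    "address": "1 Example Street",
    "email": "jane@example.com",
    "phone": "+1-555-0100",
    "latitude": 52.5200,
    "longitude": 13.4050,
}

def get_location_v1(record):
    # Poorly designed: returns the entire record, exposing PII the map never needs.
    return record

def get_location_v2(record):
    # Better design: the response is shaped to the business use case.
    return {"latitude": record["latitude"], "longitude": record["longitude"]}

print(get_location_v2(athlete_record))  # {'latitude': 52.52, 'longitude': 13.405}
```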


Apple finally embraces open source

Apple is open-sourcing a reference PyTorch implementation of the Transformer architecture to help developers deploy Transformer models on Apple devices. Google introduced the Transformer architecture in 2017, and it has since become the model of choice for natural language processing (NLP) problems. ... As a company, Apple behaves like a cult. Nobody knows what goes on inside Apple’s four walls. To the average consumer, Apple is a consumer electronics firm, unlike tech giants such as Google or Microsoft. Google, for example, is seen as a leader in AI, with top AI talent working for the company, and it has released numerous research papers over the years. Google also owns DeepMind, another company leading AI research. Apple is struggling to recruit top AI talent, and for good reason. “Apple with its top-five rank employer brand image is currently having difficulty recruiting top AI talent. In fact, in order to let potential recruits see some of the exciting machine-learning work that is occurring at Apple, it recently had to alter its incredibly secretive culture and to offer a publicly visible Apple Machine Learning Journal,” said Dr. John Sullivan.


Early adopters position themselves for quantum advantage

Perhaps most significant, however, is funding for a series of collaborative projects aimed at demonstrating specific applications for today’s quantum computers. Following a call for proposals in the autumn, for each successful bid the NQCC will first work with the project team to analyse the use case, assess the requirements, and determine whether the application can be usefully tackled with current technologies. “The next stage would be to identify appropriate algorithms or develop new ones, and then run them on a physical quantum computer,” says Decaroli. “We can then benchmark the results against classical solutions and potentially across different quantum-computing platforms.” One crucial partner in the SparQ programme is Oxford Quantum Circuits (OQC), the only UK company to offer cloud-based access to a quantum computer. Its latest eight-qubit processor, named “Lucy” after the pioneering quantum physicist Lucy Mensing, was released on Amazon Web Services in February this year. “We are looking forward to working with end users in different industry sectors to provide access to our hardware,” commented Ilana Wisby, CEO of OQC.


How decentralization and Web3 will impact the enterprise

For one, over time, Web3 will almost certainly become a vital approach to the way our IT systems work. Decentralization is now a significant industry trend that a growing number of tech consumers and businesses will insist on. Instead of storing information in our own databases and running code in parts of the cloud that we pay for or otherwise control, businesses will have to get used to relying on Web3 resources (data, compute, etc.) and sharing more of that control. Much of the important data we need to run our businesses will increasingly be kept in more private and protected places, stored in blockchains and other types of distributed ledgers. A rising share of our applications over time will be more akin to open source projects and run using smart contracts that all stakeholders can transparently view, verify, and agree to. Even our businesses will have strange new subsidiaries that are embodied entirely in code and run automatically on their own, using digital inputs from stakeholders. And this is just the beginning. The cryptographic systems and immutable transaction ledgers of Web3 have now stood enough of the test of time to prove out and show the way.


Blockchain's potential: How AI can change the decentralized ledger

When asked whether AI is too nascent a technology to have any sort of impact on the real world, he stated that, like most tech paradigms including AI, quantum computing and even blockchain, these ideas are still in their early stages of adoption. He likened the situation to the Web2 boom of the 90s, where people are only now beginning to realize the need for high-quality data to train an engine. Furthermore, he highlighted that there are already several use cases for AI that most people take for granted in their everyday lives. “We have AI algorithms that talk to us on our phones and home automation systems that track social sentiment, predict cyberattacks, etc.,” Krishnakumar stated. Ahmed Ismail, CEO and president of Fluid — an AI quant-based financial platform — pointed out that there are many instances of AI benefitting blockchain. A perfect example of this combination, per Ismail, is crypto liquidity aggregators that use a subset of AI and machine learning to conduct deep data analysis, provide price predictions and offer optimized trading strategies to identify current/future market phenomena.


We don’t need another infosec hero

Instead of thinking of ourselves as heroes—we aren’t Wonder Woman, or Batman, or Superman—it’s time to think of ourselves as sidekicks. On a good day, we help someone else make wiser risk choices, and those choices result in more profitable outcomes for everyone. But it is someone else who is the hero; we just hold their cape and refill their utility pouch. How do we do that? It begins with some humility. Most people in our profession work in cost centers. To the rest of the company, we are a drag on the business, and while we like to talk about business enablement, our first goal has to be removing the business impediment we’ve become. Are you responsible for product security? Engage the software architects who write the code and teach them how to do their own safety and security reviews earlier in their process.  ... No matter what part of the business you support, start learning what they need to do to get the job done. Identify opportunities where you can get out of their way first, and then look for ways to help improve their processes to be faster and safer.


Entering the metaverse: How companies can take their first virtual steps

If the virtual world experiment is successful, it will be because of superior immersivity. Concerts, movies, sporting events and consumer experiences must offer interactivity and holistic engagement that makes the real world appear dull and lacking in possibilities by comparison. While entertainment companies will more easily master the metaverse experience offered to audiences, brands and businesses in the vast majority of other industries will likely struggle to conceptualize and develop the level of immersivity that will be required to be effective. Healthcare, education and financial services could all prosper from virtual properties and offerings – medical professionals seeing patients and patients building communities of support, classrooms that are not confined to textbooks but bring subject matter to life for greater curiosity, and stock markets with real-time multidimensional metrics that make Bloomberg terminals appear outdated. These virtual theme parks of consumerism and participation allow for brand reinvention, offer the possibility of novel sources of revenue and obviously skew to a younger audience that may not yet have come across or interacted with these same brands in the real world.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis

Daily Tech Digest - June 16, 2022

High-Bandwidth Memory (HBM) delivers impressive performance gains

In addition to widening the bus in order to boost bandwidth, HBM technology shrinks down the size of the memory chips and stacks them in an elegant new design form. HBM chips are tiny when compared to graphics double data rate (GDDR) memory, which it was originally designed to replace. 1GB of GDDR memory chips take up 672 square millimeters versus just 35 square millimeters for 1GB of HBM. Rather than spreading out the transistors, HBM is stacked up to 12 layers high and connected with an interconnect technology called ‘through silicon via’ (TSV). The TSV runs through the layers of HBM chips like an elevator runs through a building, greatly reducing the amount of time data bits need to travel. With the HBM sitting on the substrate right next to the CPU or GPU, less power is required to move data between CPU/GPU and memory. The CPU and HBM talk directly to each other, eliminating the need for DIMM sticks. “The whole idea that [we] had was instead of going very narrow and very fast, go very wide and very slow,” Macri said.
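A rough back-of-the-envelope comparison of the "wide and slow" versus "narrow and fast" trade-off. The bus widths and per-pin rates below are representative figures for one GDDR6 chip and one HBM2E stack, chosen for illustration; they are not taken from the article.

```python
def bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
    # bits per second across the whole interface, converted to gigabytes per second
    return bus_width_bits * per_pin_gbps / 8

gddr6_chip = bandwidth_gb_s(bus_width_bits=32, per_pin_gbps=16.0)    # narrow but fast
hbm2e_stack = bandwidth_gb_s(bus_width_bits=1024, per_pin_gbps=2.4)  # wide but slow

print(f"One GDDR6 chip : {gddr6_chip:.0f} GB/s")    # ~64 GB/s
print(f"One HBM2E stack: {hbm2e_stack:.0f} GB/s")   # ~307 GB/s
```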


3 forces shaping the evolution of ERP

If there was any hesitation about moving to cloud-based ERP, it was quashed as the COVID crisis erupted, and corporate workplaces became scattered across countless home-based offices. On-premises ERP is seen as “not as scalable as people thought,” says Sharon Bhalaru, partner at accounting and technology consulting firm Armanino LLP. “We’re seeing a move to cloud-based systems,” to support remote employees who need to perform HR, financial and accounting tasks remotely. ... Next-generation ERP platforms “give companies real-time transparency with respect to sales, inventory, production, and financials,” the Boston Consulting Group analysts wrote. “Powerful data-driven analytics enables more agile decisions, such as adjustments to the supply chain to improve resilience. Robust e-commerce capabilities help companies better engage with online customers before and after a sale. And a lean ERP core and cloud-first approach increase deployment speed.” ... Unprecedented and ongoing supply chain disruptions underscore the need for greater visibility, more predictable lead times, alternative supply sources, and faster response to disruptions.


Interpol arrests thousands in global cyber fraud crackdown

The operation’s targets included telephone scammers, long-distance romance scammers, email fraudsters and other connected financial criminals, identified through a prior intelligence operation using Interpol’s secure global comms network, sharing data on suspects, suspicious bank accounts, unlawful transactions, and communications means such as phone numbers, email addresses, fake websites and IP addresses. “Telecom and BEC fraud are sources of serious concern for many countries and have a hugely damaging effect on economies, businesses and communities,” said Rory Corcoran. “The international nature of these crimes can only be addressed successfully by law enforcement working together beyond borders, which is why Interpol is critical to providing police the world over with a coordinated tactical response.” Duan Daqi added: “The transnational and digital nature of different types of telecom and social engineering fraud continues to present grave challenges for local police authorities, because perpetrators operate from a different country or even continent than their victims and keep updating their fraud schemes.”


Is Cyber Essentials Enough to Secure Your Organisation?

If you are to have confidence in your security controls, you must implement defence in depth. This requires a holistic approach to cyber security that addresses people, processes and technology. Key aspects of this aren’t addressed in Cyber Essentials, such as staff awareness training, vulnerability scanning and incident response. Employees are at the heart of any cyber security system, because they are the ones responsible for handling sensitive information. If they don’t understand their data protection requirements, it could result in disaster. Meanwhile, vulnerability scanning ensures that organisations can spot weaknesses in their systems before a cyber criminal can exploit them. It’s a more advanced form of protection than is offered with secure configuration and system updates, enabling organisations to proactively secure their systems. Conversely, incident response measures give organisations the tools they need to respond after a security incident has occurred. Most of the damage caused by a data breach occurs after the initial intrusion, so a prompt and organised response can be the difference between a minor disruption and a catastrophe.


Imagining a world without open standards

The open standard makes portability easier for software developers, provides integrators with choice in the building blocks for solutions, and enables customers to focus on solving business problems rather than integration issues. Open standards eliminate the need for organizations to expend energy wrangling with competitors on defining how systems should work, giving them the space and time to focus on building and improving how those systems actually do work. The real benefits, though, are downstream of vendors: open standards mean that businesses can effectively communicate and collaborate both internally and with peers. They mean that the expertise built up by a professional in one market or business can be taken with them wherever they want to work. They mean that a lack of knowledge resources is not the barrier that prevents businesses from making the move towards better, more efficient ways of working. In imagining a world without open standards, then, the image is one of businesses constantly having to navigate between the walled gardens of different technology vendors, reskilling and rehiring as they do so, before they can even begin the serious work of delivering value from that technology.


Good Habits That Every Programmer Should Have

We become good at a specific technology by working with it for a long time. But how do we become an expert in it? Learning internals is a great habit that helps us become an expert in any technology. For example, after working for some time with Git, you can learn Git internals via the lesser-known plumbing commands. You can make accurate technical decisions when you understand the internals of your technology stack. When you learn internals, you become more familiar with the limitations and workarounds of a specific technology. Learning internals also helps us understand what we are doing with programming every day. Motivate everyone to learn more about their tools’ internals! ... Sometimes, we derive programming solutions from example code snippets that we find on internet forums. It’s a good habit to give credit to other programmers’ hard work when we use their code snippets, libraries, and tools, even when their licensing documents say that attribution is not required.


Reducing Cybersecurity Risk From and to Third Parties

There are a number of ways in which organizations may be able to obtain attack information from third parties, if they agree. Ideally, such requirements should be included in service agreements and partnership contracts for vendors, outsourcers, and partners, as listed in the article, “Using Contracts to Reduce Cybersecurity Risks.” Employment contracts, nondisclosure agreements and license agreements may also include requirements that protect organizations against third-party risk. While it is helpful to request vendors, outsourcers and partners to commit to risk reduction in the contractual terms and conditions, it is even more beneficial for an organization to have direct access to partners’ and suppliers’ security monitoring systems. ... More modern forms of protection monitor messages for origin and content and respond with information about unauthorized sources—as with IDSs—or preventive action—as with IPSs. Advancements in these systems include observation of unusual behavior and the use of artificial intelligence (AI) to determine threats.


How Upskilling Could Resolve The Cybersecurity Skills Gap

With a shortage of new candidates, upskilling provides the answer to the cybersecurity skills gap. And it brings multiple benefits for both employees and businesses. One of the first is that, ultimately, cybersecurity is everyone’s business. From the CEO to the new employee at home, everyone has a role to play in ensuring systems are robust in the face of a growing wave of attacks. While this does not mean that everyone in a company needs to be a cybersecurity professional, it does mean that everyone should be aware of the risks, how to spot potential vulnerabilities and attacks and the practical measures they must take to prevent them. However, it can also produce a supply of cybersecurity professionals. Waiting for qualified entrants to the jobs market will take too long and, in practice, it’s likely they will not be qualified for long! The cybersecurity environment changes so rapidly, the knowledge many graduates gain at the start of their course may not be relevant by the end. Instead, identifying existing staff with the soft skills, or power skills, to develop, adapt, and learn may be the quickest and easiest path to take.


12 tips for achieving IT agility in the digital era

“If your tech stack is streamlined, easy to access, and easy to use, your workforce can quickly respond to business or customer needs seamlessly,” says Fleetcor’s duFour. Key to this is getting a handle on application sprawl by rationalizing the IT portfolio. Voya Financial’s simplification journey began with such an effort, a process that reduced its application footprint by 17% and its slate of technology tools by one quarter. The work continues as part of its cloud migration work. “This practice is instilling standards and discipline that will only help to ensure our environment remains uncluttered and contemporary for the long term,” Keshavan says. As a result, the IT group is faster and more flexible, recently deploying five new cloud services for data science and analytics developers to use within four hours — something that would have taken a cross-functional IT team several weeks to deploy in the past. Reining in application sprawl has also been valuable at Snow Software. “Oftentimes, companies and teams will invest in applications with similar purposes,” says Snow Software CIO Alastair Pooley.


True Component-Testing of the GUI With Karate Mock Server

There’s an important reason why old-style end-to-end tests are often more expensive than needed: you tend to test paths that are not relevant to the frontend logic. Each of these adds to the total test suite run time. Consider a web application for your tax return. The user journey in this non-trivial app consists of submitting a series of questionnaires, their content customized depending on what you answered in previous steps. There is likely some logic on the frontend to manage the turns in that user journey, but the number-crunching over your sources of income and deductibles surely happens on the backend. You don’t need a GUI test to validate the correctness of those calculations. With a mock backend that would be entirely pointless. You set it up to tell the frontend that the final amount to pay is 12600 Euros. You can test that this amount is properly displayed, but there’s no testing its correctness. All the decisions are made (and hopefully tested) elsewhere, so we can treat it as a hardcoded test fixture.
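The article's approach uses Karate's mock server; purely to illustrate the idea of a hardcoded fixture, here is a stand-in sketch using Python's standard library. The endpoint path and payload shape are assumptions for the example.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TaxMock(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/tax/summary":           # hypothetical endpoint
            body = json.dumps({"amount_due_eur": 12600}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)                    # the hardcoded test fixture
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The GUI test points the frontend at this server and asserts that "12600"
    # is displayed; the correctness of the calculation is tested elsewhere.
    HTTPServer(("localhost", 8080), TaxMock).serve_forever()
```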



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - June 15, 2022

Software Engineering - The Soft Parts

Transferable skills are those you can take with you from project to project. Let's talk about them in relation to the fundamentals. The fundamentals are the foundation of any software engineering career. There are two layers to them - macro and micro. The macro layer is the core of software engineering and the micro layer is the implementation (e.g. the tech stack, libraries, frameworks, etc.). At a macro level, you learn programming concepts that are largely transferable regardless of language. The syntax may differ, but the core ideas are still the same. This can include things like: data-structures (arrays, objects, modules, hashes), algorithms (searching, sorting), architecture (design patterns, state management) and even performance optimizations. These are concepts you'll use so frequently that knowing them backwards can have a lot of value. At a micro level, you learn the implementation of those concepts. This can include things like: the language you use (JavaScript, Python, Ruby, etc), the frameworks you use (e.g. React, Angular, Vue etc), the backend you use (e.g. Django, Rails, etc), and the tech stack you use (e.g. Google App Engine, Google Cloud Platform, etc).


Why young tech workers leave — and what you can do to keep them

When employees seek a raise, what they’re really doing is shopping around and comparing offers from other companies, according to Sethi. And when it comes to salaries, companies must keep up with inflation, which is running at about 8% a year. But retaining employees requires more than just pay. Workers also want more support in translating environmental, social, and governance (ESG) considerations to their work. “Fulfilling work and the opportunity to be one’s authentic self at work also matter to employees who are considering a job change," Sethi said. "Pay is table stakes, but I also want my job to be meaningful and fulfilling, and I want to work at a place where I can be myself." Employees also want workplace flexibility. That, and human-centric work policies, can reduce attrition and increase performance. In fact, Gartner found that 65% of IT employees said that whether they can work flexibly affects their decision to stay at an organization.


A neuromorphic computing architecture that can run some deep neural networks more efficiently

Researchers at Graz University of Technology and Intel have recently demonstrated the huge potential of neuromorphic computing hardware for running DNNs in an experimental setting. Their paper, published in Nature Machine Intelligence and funded by the Human Brain Project (HBP), shows that neuromorphic computing hardware could run large DNNs 4 to 16 times more efficiently than conventional (i.e., non-brain inspired) computing hardware. "We have shown that a large class of DNNs, those that process temporally extended inputs such as for example sentences, can be implemented substantially more energy-efficiently if one solves the same problems on neuromorphic hardware with brain-inspired neurons and neural network architectures," Wolfgang Maass, one of the researchers who carried out the study, told TechXplore. "Furthermore, the DNNs that we considered are critical for higher level cognitive function, such as finding relations between sentences in a story and answering questions about its content." In their tests, Maass and his colleagues evaluated the energy-efficiency of a large neural network running on a neuromorphic computing chip created by Intel.


Why Your Database Needs a Machine Learning Brain

By keeping the ML at the database level, you’re able to eliminate several of the most time-consuming steps — and in doing so, ensure sensitive data can be analyzed within the governance model of the database. At the same time, you’re able to reduce the timeline of the project and cut points of potential failure. Furthermore, by placing ML at the data layer, it can be used for experimentation and simple hypothesis testing without it becoming a mini-project that requires time and resources to be signed off. This means you can try things on the fly, and not only increase the amount of insight but the agility of your business planning. By integrating the ML models as virtual database tables, alongside common BI tools, even large datasets can be queried with simple SQL statements. This technology incorporates a predictive layer into the database, allowing anyone trained in SQL to solve even complex problems related to time series, regression or classification models. In essence, this approach "democratizes" access to predictive data-driven experiences.
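As a runnable toy of that idea, the sketch below registers a stand-in "model" as a SQL function in SQLite, so a prediction can be pulled with a plain SELECT. The table, columns and the hardcoded rule are all made up; real platforms expose trained models as virtual tables in much the same spirit.

```python
import sqlite3

def predict_churn_risk(tenure_months: int) -> float:
    # Placeholder "model": newer customers are treated as higher risk.
    return max(0.0, 1.0 - tenure_months / 24.0)

conn = sqlite3.connect(":memory:")
conn.create_function("predict_churn_risk", 1, predict_churn_risk)
conn.executescript("""
    CREATE TABLE customers (id INTEGER, tenure_months INTEGER);
    INSERT INTO customers VALUES (1, 2), (2, 30);
""")

# Anyone trained in SQL can now ask for a prediction per row.
for row in conn.execute("SELECT id, predict_churn_risk(tenure_months) FROM customers"):
    print(row)
```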


Understanding Low-code Development

If you are interested in getting started with low-code development, you will need a few things. First, you will need a low-code development platform. Several different options are available, so you should analyze your requirements and explore them to find one that meets your needs before choosing. Once you have chosen a platform, you will need to learn how to use it. This may require some training or reading documentation. Finally, you will need some ideas for what you want to build. You are now ready to start low-code development. ... Here are some of the downsides of using low-code platforms for software development: Lack of Customization – Even though the pre-built modules of low-code platforms are incredibly handy to work with, you can customize your application with them only to a limited extent. In most cases, low-code components are generic, and if you want to tailor your app you should invest time and effort in custom app development.


Authentic Allyship and Intentional Leadership

Enterprises and leaders have to be intentional about their allyship. It has to be authentic allyship, not just surface allyship. I mention intentional allyship because a lot of times people think they’re an ally, and support diversity hires, but they’re just checking a box. We want intentional and authentic allyship. We need you to understand it goes beyond the person you’re helping. You’re helping the generation, not just one person. You think you’re only affecting the employee right in front of you, but that individual has a family and the next generation after them. You’re not just checking a box; you’re impacting destiny. When you’re an intentional ally, you think beyond the person in front of you, beyond the job application, beyond what you see. It’s not about you but what you’re doing for that person and that person’s generation to come. You need to really think about the step you’ll take when it comes to allyship. Make an impact – a lot of times we talk but don’t implement. Activate, implement, follow up. Don’t just implement and leave them there. Follow up – ask them how they’re doing, and if they know anyone else you can bring in. 


Software engineering estimates are garbage

Garbage estimates don’t account for the humanity of the people doing the work. Worse, they imply that only the system and its processes matter. This ends up forcing bad behaviors that lead to inferior engineering, loss of talent, and ultimately less valuable solutions. Such estimates are the measuring stick of a dysfunctional culture that assumes engineers will only produce if they’re compelled to do so—that they don’t care about their work or the people they serve. Falling behind the estimate’s promises? Forget about your family, friends, happiness, or health. It’s time to hustle and grind. Can’t craft a quality solution in the time you’ve been allotted? Hack a quick fix so you can close out the ticket. Solving the downstream issues you’ll create is someone else’s problem. Who needs automated tests anyway? Inspired with a new idea of how this software could be built better than originally specified? Keep it to yourself so you don’t mess up the timeline. Bludgeon people with the estimate enough, and they’ll soon learn to game the system.


Return to the office or else? Why bosses' ultimatums are missing the point

Employers who insist their staff return to the office full time are heading into increasingly dangerous territory. Skilled professionals, tech workers included, have so many opportunities available to them right now that it's difficult to see why they would sacrifice job satisfaction for their bosses. The outlook has never been better for knowledge workers – and indeed, workers more generally – across all industries. Not only are employers paying more to get the skills they need, but the breadth of flexible-working options for employees fed up with office life continues to grow. People aren't just working from home – they're working from wherever they choose, and whenever they choose. At the same time, significant momentum is gathering behind the introduction of a four-day work week, which could push the dynamic even further in favour of worker wellbeing while benefitting employers too. Companies who offer 100% pay for 80% of the hours will have a seriously powerful bargaining chip to play in the war for talent, and no company – regardless of their brand, product or credentials – will be untouchable.


UK needs to upskill to achieve quantum advantage

Discussing the pilot, Stephen Till, fellow at the Defence Science and Technology Laboratory (Dstl), an executive agency of the MoD, said: “This work with ORCA Computing is a milestone moment for the MoD. Accessing our own quantum computing hardware will not only accelerate our understanding of quantum computing, but the computer’s room-temperature operation will also give us the flexibility to use it in different locations for different requirements. “We expect the ORCA system to provide significantly improved latency – the speed at which we can read and write to the quantum computer. This is important for hybrid algorithms, which require multiple handovers between quantum and classical systems.” Piers Clinton-Tarestad, a partner in EY’s technology risk practice, said there is a general consensus that quantum computing will start becoming a reality in 2030. But pilot projects, such as the one being conducted at the MoD, and proof-of-concept applications can help business leaders to understand where quantum technology can be applied. 


Using automation to improve employee experience

The possibilities to improve the employee experience through automation and integration are endless. If you want to pilot something in your organization, poll your employees about what would be the most impactful. Where are they seeing sludge that drags down morale and slows business velocity? You and your IT team can plot each idea on an impact and effort prioritization matrix. Some suggestions may be easier to implement than you think, as many cloud services are already API-enabled, making automation straightforward. Once your team implements an initial valuable and visible integration, more employee lightbulbs will go off, identifying additional ideas for automation and integration for your prioritization backlog. And don’t forget about the ROI calculators in your automation tooling, as they will help objectively refine your prioritization by analyzing your planned and actual savings. Not only will your employees benefit directly from the automation, but they will also feel heard when they see their ideas come to life.



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - June 14, 2022

Business Architecture - A New Depiction

Crucial to this depiction are components which exist in both the vertical pillars and the horizontal Business Architecture layer as follows: Application Architecture: includes the Business Process component, to associate application components (logical & operational) with the business activity they support. Information Architecture: includes the Information Component from a business perspective separately from any logical or operational representation of that information by data (structured or unstructured). Infrastructure Architecture: contains the location component. This is to recognize that business infrastructure is linked to an organization / location either by physical installation or network access. Business Architecture consists of these business components – shared with the other domains – and, in addition, more complex views which link the architecture with the business plans. For example, an architecture view for a business capability (as defined through capability-based planning) would show how the components support that capability. The 3 vertical domains can be considered to constitute IT Architecture (for the enterprise). 


Meet Web Push

One goal of the WebKit open source project is to make it easy to deliver a modern browser engine that integrates well with any modern platform. Many web-facing features are implemented entirely within WebKit, and the maintainers of a given WebKit port do not have to do any additional work to add support on their platforms. Occasionally features require relatively deep integration with a platform. That means a WebKit port needs to write a lot of custom code inside WebKit or integrate with platform specific libraries. For example, to support the HTML <audio> and <video> elements, Apple’s port leverages Apple’s Core Media framework, whereas the GTK port uses the GStreamer project. A feature might also require deep enough customization on a per-Application basis that WebKit can’t do the work itself. For example web content might call window.alert(). In a general purpose web browser like Safari, the browser wants to control the presentation of the alert itself. But an e-book reader that displays web content might want to suppress alerts altogether. From WebKit’s perspective, supporting Web Push requires deep per-platform and per-application customization.


Introduction to Infrastructure as Code - Part 1: Introducing IaC

In recent years, development has shifted away from monolithic applications and towards microservices architectures and cloud-native applications. However, modernizing apps introduces complexity, as maintaining the cloud computing architecture requires infrastructure automation tools, efficient provisioning, and scaling of new resources. Too many developers still see infrastructure provisioning and management as an opaque process that Ops teams perform using GUI tools like the Azure Portal. Infrastructure as code (IaC) challenges that notion. The practice of IaC unifies development and operations, creating a close bond between code and infrastructure. Why should we use IaC? When you develop an application, you create code, build and version it, and deploy the artifact through the DevOps pipeline. IaC allows you to create your infrastructure in the cloud using code, enabling you to version and execute that code whenever necessary. This three-article series starts with an introduction to IaC. Then, the following two articles in this series show how to use the Bicep language and Terraform HCL syntax to create templates and automatically provision resources on Azure.
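The next two articles in the series show Bicep and Terraform HCL; purely to illustrate the underlying idea of declaring infrastructure in versionable, executable code, here is a minimal sketch using Pulumi's Python SDK instead. The resource names are placeholders.

```python
# Illustrative only: the series itself uses Bicep and Terraform; this shows the
# same "infrastructure as versionable code" idea with Pulumi's Python SDK.
import pulumi
from pulumi_azure_native import resources, storage

# Because the environment is declared in code, it can be code-reviewed,
# versioned in Git, and re-applied to recreate the environment on demand.
rg = resources.ResourceGroup("app-rg", location="westeurope")

account = storage.StorageAccount(
    "appstorage",
    resource_group_name=rg.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

pulumi.export("resource_group", rg.name)
pulumi.export("storage_account", account.name)
```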


VPN providers flee Indian market ahead of new data rules

The new directive by India's top cybersecurity agency, the Indian Computer Emergency Response Team (Cert-In), requires VPN, Virtual Private Server (VPS) and cloud service providers to store customers' names, email addresses, IP addresses, know-your-customer records, and financial transactions for a period of five years. SurfShark announced on Wednesday in a post titled "Surfshark shuts down servers in India in response to data law," that it "proudly operates under a strict "no logs" policy, so such new requirements go against the core ethos of the company." SurfShark is not the first VPN provider to pull its servers from the country following the directive. ExpressVPN also decided to take the same step just last week, and NordVPN has also warned that it will be removing physical servers if the directives are not reversed. ... Like many businesses around the world, Indian companies have increased their reliance on VPNs since the COVID-19 pandemic forced many employees to work from home. VPN adoption grew to allow employees to access sensitive data remotely, even as companies started adopting other secure means to allow remote access such as Zero Trust Network Access and Smart DNS solutions.


5 top deception tools and how they ensnare attackers

To work, deception technologies essentially create decoys, traps that emulate real systems. These decoys work because of the way most attackers operate. For instance, when attackers penetrate the environment, they typically look for ways to build persistence. This typically means dropping a backdoor. In addition to the backdoor, attackers will attempt to move laterally within organizations, naturally trying to use stolen or guessed access credentials. As attackers find data and systems of value, they will deploy additional malware and exfiltrate data, typically using the backdoor(s) they dropped. With traditional anomaly detection and intrusion detection/prevention systems, enterprises try to spot these attacks in progress across their entire networks and systems. Still, the problem is that these tools rely on signatures or error-prone machine learning algorithms and throw off a tremendous number of false positives. Deception technologies, by contrast, have a higher threshold to trigger events, and those events tend to be real threat actors conducting real attacks.
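To give a flavor of the decoy idea, here is a minimal sketch: a fake service that no legitimate user should ever touch, so any connection it receives is a high-confidence signal. Real deception platforms emulate far richer systems; the port and banner here are arbitrary choices for the example.

```python
import logging
import socketserver

logging.basicConfig(filename="decoy.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        peer = self.client_address[0]
        self.request.sendall(b"220 ftp ready\r\n")    # look vaguely like a real service
        data = self.request.recv(1024)                # capture whatever the intruder sends
        logging.info("decoy touched by %s, first bytes: %r", peer, data[:64])

if __name__ == "__main__":
    # No production system points at this port, so every hit warrants an alert.
    with socketserver.TCPServer(("0.0.0.0", 2121), DecoyHandler) as server:
        server.serve_forever()
```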


MIT built a new reconfigurable AI chip that can reduce electronic waste

The team's optical communication system comprises paired photodetectors and LEDs patterned with tiny pixels. The photodetectors feature an image sensor for receiving data, and LEDs transmit that data to the next layer. Since the components must fit together like LEGO bricks to form the reconfigurable AI chip, they must be compatible. "The sensory chip at the bottom receives signals from the outside environment and sends the information to the next chip above by light signals. The next chip, which is a processor layer, receives the light information and then processes the pre-programmed function. Such light-based data transfer continues to other chips above, thus performing multi-functional tasks as a whole," the team explained. ... The team fabricated a single chip with a computing core that measured about four square millimeters. The chip is stacked with three image recognition "blocks", each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response.


Augmented reality head-up displays: Navigating the next-gen driving experience

HUDs work by projecting a transparent 2D or 3D digital image of navigational and hazard warning information, for example, onto the windscreen of the vehicle. These projected images then merge with the driver's view of the road ahead. Windshield HUDs, for example, are set up so that the driver does not need to shift their gaze away from the road in order to view the relevant, timely information. This technology helps to keep the driver's attention on the road, as opposed to the driver having to look down at the dashboard or navigation system. Technological advances in this area have led to HUDs with holographic displays and AR in 3D. This added depth perception makes it possible to project computer-generated virtual objects in real time into the driver's field of view to warn, inform or entertain the user. The driver's alertness to road obstacles is increased by enabling shorter obstacle visualization times, and eye strain and driving stress levels are reduced. "Holographic HUDs are paramount if we are to explore the possibilities of augmented and mixed reality for road safety," said Jana


Nigerian Police Bust Gang Planning Cyberattacks on 10 Banks

The operation was a coordinated effort between the Economic and Financial Crimes Commission of Nigeria, Interpol, the National Central Bureaus and law enforcement agencies of 11 countries across Southeast Asia, according to Interpol. The operation was initiated after Interpol's private sector partner Trend Micro provided operational intelligence to the agency about the "emergence and usage of Agent Tesla malware" in this case. Agent Tesla was found on the mobile phones and laptops of the syndicate members that were seized by the EFCC during the bust. "Through its global police network and constant monitoring of cyberspace, Interpol had the globally sourced intelligence needed to alert Nigeria to a serious security threat where millions could have been lost without swift police action," Interpol Director of Cybercrime Craig Jones says in the statement. "Further arrests and prosecutions are foreseen across the world as intelligence continues to come in and investigations unfold." 


10 ways DevOps can help reduce technical debt

In most cases, technical debt occurs because development teams take shortcuts to meet tight deadlines and struggle with constant changes. But better collaboration between dev and ops can shorten the SDLC, speed up deployments, and increase their frequency. Moreover, CI/CD and continuous testing make it easier for teams to deal with changes. Overall, the collaborative culture encourages code reviews, good coding practices, and robust testing with mutual help. ... Technical debt is best controlled when managed continuously, which becomes easier with DevOps. As it facilitates constant communication, teams can track debt, facilitate awareness and resolve it as soon as possible. Team leaders can also include technical debt review in the backlog and schedule maintenance sprints to deal with it promptly. Moreover, DevOps reduces the chances of incomplete or deferred tasks in the backlog, helping prevent technical debt. ... A true DevOps culture can be the key to managing technical debt over long periods. DevOps culture encourages strong collaboration between cross-functional teams, provides autonomy and ownership, and practices continuous feedback and improvement.


Once is never enough: The need for continuous penetration testing

The traditional attitude to manual pen testing is kind of like the traditional approach to driving navigation: nothing can replace the sophistication and accrued knowledge of a human. A taxi driver will always beat Google Maps, and a trained pen testing professional will find vulnerabilities and attacks that automated tests may miss, or identify responses that appear legitimate to automated software but are actually a threat. The truth is, on a case-by-case basis, this could conceivably be true. But with off-the-shelf tools and services like RaaS (Ransomware as a Service) or MaaS (Malware as a Service) that use AI/ML capabilities to enhance attack efficiency – you’d need an army of pen testers to truly meet the challenges of today’s cyber threats. And once you’d found, trained and employed them – cyberattackers would simply increase their automation efforts and you’d need to draft another army. Not a sustainable cybersecurity model, clearly. Similarly, the widescale adoption of agile development methodologies has translated into increasingly frequent software releases.



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - June 13, 2022

The Increasingly Graphic Nature Of Intel Datacenter Compute

What customers are no doubt telling Intel and AMD is that they want highly tuned pieces of hardware co-designed with very precise workloads, and that they will want them at much lower volumes for each multi-motor configuration than chip makers and system builders are used to. Therefore, these compute engine complexes we call servers will carry higher unit costs than chip makers and system builders are used to, but not necessarily with higher profits. In fact, quite possibly with lower profits, if you can believe it. This is why Intel is taking a third whack at discrete GPUs with its Xe architecture and, significantly, with the “Ponte Vecchio” Xe HPC GPU accelerator that is at the heart of the “Aurora” supercomputer at Argonne National Laboratory. And this time the architecture of the GPUs is a superset of the integrated GPUs for its laptops and desktops, not some Frankenstein X86 architecture that is not really tuned for graphics even if it could be used as a massively parallel compute engine in the way that GPUs from Nvidia and AMD have been.


Under the hood: Meta’s cloud gaming infrastructure

Our goal within each edge computing site is to have a unified hosting environment to make sure we can run as many games as possible as smoothly as possible. Today’s games are designed for GPUs, so we partnered with NVIDIA to build a hosting environment on top of NVIDIA Ampere architecture-based GPUs. As games continue to become more graphically intensive and complex, GPUs will provide us with the high fidelity and low latency we need for loading, running, and streaming games. To run games themselves, we use Twine, our cluster management system, on top of our edge computing operating system. We build orchestration services to manage the streaming signals and use Twine to coordinate the game servers on edge. We built and used container technologies for both Windows and Android games. We have different hosting solutions for Windows and Android games, and the Windows hosting solution comes with the integration with PlayGiga. We’ve built a consolidated orchestration system to manage and run the games for both operating systems.


Google AI Introduces ‘LIMoE’

A typical Transformer comprises several “blocks,” each containing several distinct layers. One of these layers is a feed-forward network (FFN). In LIMoE and the works described above, this single FFN is replaced by an expert layer containing multiple parallel FFNs, each of which is an expert. Given a sequence of tokens to process, a router predicts which expert should handle each token. ... The model’s cost is comparable to the regular Transformer model if only one expert is activated. LIMoE does exactly that, activating one expert per example and matching the dense baselines’ computing cost. The LIMoE router, on the other hand, may see either image or text data tokens. MoE models can fail in unique ways, such as trying to deliver all tokens to the same expert. Auxiliary losses, or additional training objectives, are commonly used to encourage balanced expert utilization. The Google AI team discovered that dealing with multiple modalities combined with sparsity resulted in novel failure modes that conventional auxiliary losses could not solve. To address this, they designed new auxiliary losses.
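To make the routing idea concrete, here is a stripped-down, single-modality sketch of a top-1 mixture-of-experts layer in PyTorch. It is not Google's LIMoE (which also adds auxiliary losses to keep expert usage balanced across image and text tokens); the sizes are arbitrary.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)           # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_tokens, d_model)
        gate = self.router(tokens).softmax(dim=-1)              # routing probabilities
        weight, choice = gate.max(dim=-1)                       # top-1 expert per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = choice == i
            if mask.any():                                      # only the chosen expert runs
                out[mask] = weight[mask].unsqueeze(-1) * expert(tokens[mask])
        return out

moe = Top1MoE(d_model=64, d_hidden=256, num_experts=4)
print(moe(torch.randn(10, 64)).shape)                           # torch.Size([10, 64])
```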


Stop Splitting Yourself in Half: Seek Out Work-Life Boundaries, Not Balance

What makes boundaries different from balance? Balance implies two things that aren't equal that you're constantly trying to make equal. It creates the expectation of a clear-cut division. A work-life balance fails to acknowledge that you are a whole person, and sometimes things can be out of balance without anything being wrong. Sometimes you'll spend days, weeks and even whole seasons of life choosing to lean more into one part of your life than the other. Boundaries ask you to think about what's important to you, what drives you, and what authenticity looks like for you. Boundaries require self-awareness and self-reflection, along with a willingness and ability to prioritize. Those qualities help you to be more aware and more capable of making decisions at a given moment. By establishing boundaries grounded in your priorities, you're more equipped to make choices. Boundaries empower you to say, "This is what I'm choosing right now. I need to be fully here until this is done." Boundaries aren't static, either. 


Why it’s time for 'data-centric artificial intelligence'

AI systems need both code and data, and “all that progress in algorithms means it's actually time to spend more time on the data,” Ng said at the recent EmTech Digital conference hosted by MIT Technology Review. Focusing on high-quality data that is consistently labeled would unlock the value of AI for sectors such as health care, government technology, and manufacturing, Ng said. “If I go see a health care system or manufacturing organization, frankly, I don't see widespread AI adoption anywhere.” This is due in part to the ad hoc way data has been engineered, which often relies on the luck or skills of individual data scientists, said Ng, who is also the founder and CEO of Landing AI. Data-centric AI is a new idea that is still being discussed, Ng said, including at a data-centric AI workshop he convened last December. ... Data-centric AI is a key part of the solution, Ng said, as it could provide people with the tools they need to engineer data and build a custom AI system that they need. “That seems to me, the only recipe I'm aware of, that could unlock a lot of this value of AI in other industries,” he said.


How Do We Utilize Chaos Engineering to Become Better Cloud-Native Engineers?

The main goal of Chaos Engineering is captured in its standard definition: “Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” The idea of Chaos Engineering is to identify weaknesses and reduce uncertainty when building a distributed system. As I already mentioned above, building distributed systems at scale is challenging, and since such systems tend to be composed of many moving parts, leveraging Chaos Engineering practices to reduce the blast radius of failures has proved to be a great method for that purpose. We leverage Chaos Engineering principles to achieve other things besides its main objective. The “On-call like a king” workshops intend to achieve two goals in parallel—(1) train engineers on production failures that we had recently; (2) train engineers on cloud-native practices, tooling, and how to become better cloud-native engineers!
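As a tiny, controlled taste of that kind of experiment, the sketch below injects latency into a fraction of calls to a single function so the surrounding code's timeouts and fallbacks can be observed. Real chaos experiments typically act on infrastructure (terminating instances, degrading the network) with an explicit blast radius and abort criteria; the function and thresholds here are made up.

```python
import random
import time
from functools import wraps

def inject_latency(probability: float = 0.1, delay_s: float = 2.0):
    """Delay a fraction of calls to simulate a slow downstream dependency."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < probability:   # the injected "turbulent condition"
                time.sleep(delay_s)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_latency(probability=0.3, delay_s=1.5)
def fetch_profile(user_id: int) -> dict:
    return {"user_id": user_id, "name": "example"}

# Exercise the dependent code path under injected latency and verify that
# retries, timeouts and fallbacks behave the way the team expects.
print(fetch_profile(7))
```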


The 3 Phases of Infrastructure Automation

Manually provisioning and updating infrastructure multiple times a day from different sources, in various clouds or on-premises data centers, using numerous workflows is a recipe for chaos. Teams will have difficulty collaborating or even sharing a view of the organization’s infrastructure. To solve this problem, organizations must adopt an infrastructure provisioning workflow that stays consistent for any cloud, service or private data center. The workflow also needs extensibility via APIs to connect to infrastructure and developer tools within that workflow, and the visibility to view and search infrastructure across multiple providers. ... The old-school, ticket-based approach to infrastructure provisioning makes IT into a gatekeeper, where they act as governors of the infrastructure but also create bottlenecks and limit developer productivity. But allowing anyone to provision infrastructure without checks or tracking can leave the organization vulnerable to security risks, non-compliance and expensive operational inefficiencies.


Questioning the ethics of computer chips that use lab-grown human neurons

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second. This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants generating 500 megawatts each to store the same amount of data contained in one human brain in modern data storage centres. Companies do not need brain tissue samples from donors, but can simply grow the neurons they need in the lab from ordinary skin cells using stem cell technologies. Scientists can engineer cells from blood samples or skin biopsies into a type of stem cell that can then become any cell type in the human body.


How Digital Twins & Data Analytics Power Sustainability

Seeding technology innovation across an enterprise requires broader and deeper communication and collaboration than in the past, says Aapo Markkanen, an analyst in the technology and service providers research unit at Gartner. “There’s a need to innovate and iterate faster, and in a more dynamic way. Technology must enable processes such as improved materials science and informatics and simulations.” Digital twins are typically at the center of the equation, says Mark Borao, a partner at PwC. Various groups, such as R&D and operations, must have systems in place that allow teams to analyze diverse raw materials, manufacturing processes, and recycling and disposal options --and understand how different factors are likely to play out over time -- and before an organization “commits time, money and other resources to a project,” he says. These systems “bring together data and intelligence at a massive scale to create virtual mirrored worlds of products and processes,” Podder adds. In fact, they deliver visibility beyond Scope 1 and Scope 2 emissions, and into Scope 3 emissions.


API security warrants its own specific solution

If the API doesn’t apply sufficient internal rate limiting on parameters such as response timeouts, memory, payload size, number of processes, records and requests, attackers can send multiple API requests to create a denial of service (DoS) attack. This overwhelms back-end systems, crashing the application or driving resource costs up. Prevention requires API resource consumption limits to be set. This means setting thresholds for the number of API calls and for client notifications such as resets and lockouts. Server-side, validate the size of the response in terms of the number of records and resource consumption tolerances. Finally, define and enforce the maximum size of data the API will accept on all incoming parameters and payloads, using metrics such as the length of strings and the number of array elements. ... Effectively a different spin on BOLA, this flaw sees the attacker able to send requests to functions that they are not permitted to access. It amounts to an escalation of privilege: because access permissions are not enforced or segregated, the attacker can impersonate an admin, helpdesk user or superuser and carry out commands or access sensitive functions, paving the way for data exfiltration.
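
A minimal sketch of a few of these controls, assuming a Flask-based API; the endpoint name, limits and the naive in-memory rate limiter are illustrative choices, not taken from the article or from any particular product.

import time
from collections import defaultdict
from flask import Flask, jsonify, request

app = Flask(__name__)
# Reject request bodies larger than 16 KB outright (Flask responds with 413).
app.config["MAX_CONTENT_LENGTH"] = 16 * 1024

# Naive in-memory rate limiter: at most 60 requests per client per minute.
WINDOW_SECONDS, MAX_REQUESTS = 60, 60
hits = defaultdict(list)

def over_limit(client: str) -> bool:
    now = time.time()
    hits[client] = [t for t in hits[client] if now - t < WINDOW_SECONDS]
    hits[client].append(now)
    return len(hits[client]) > MAX_REQUESTS

MAX_ITEMS, MAX_STRING_LEN, MAX_RECORDS_RETURNED = 100, 256, 1000

@app.route("/orders/search", methods=["POST"])
def search_orders():
    if over_limit(request.remote_addr or "unknown"):
        return jsonify(error="rate limit exceeded"), 429
    body = request.get_json(silent=True) or {}
    ids = body.get("ids", [])
    # Enforce limits on incoming parameters: array length and string length.
    if len(ids) > MAX_ITEMS or any(len(str(i)) > MAX_STRING_LEN for i in ids):
        return jsonify(error="payload exceeds allowed size"), 400
    results = []  # look up orders here; cap the response size server-side
    return jsonify(results=results[:MAX_RECORDS_RETURNED])

if __name__ == "__main__":
    app.run()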



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - June 11, 2022

Cloud computing security: Where it is, where it's going

Most businesses use multiple cloud services and cloud providers, a hybrid approach that can support granular security options where vital data is kept close (perhaps in a private cloud) while less sensitive applications run in a public cloud to take advantage of big tech's economies of scale. But the hybrid model also introduces new complications, as every provider will have a slightly different set of security models that cloud customers will need to understand and manage. That takes time and (often elusive) expertise. But misconfigured services are high on the list of causes of security incidents, along with even more basic failures like poor passwords and identity controls. Little surprise, then, that companies are evaluating tools to automate much of this. That's leading to interest in new technologies such as Cloud Security Posture Management (CSPM) tools, which can help security teams spot and fix potential security issues around misconfiguration and compliance in the cloud, so they know the same rules are being enforced across their cloud services.
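
To give a flavor of what a CSPM-style check does, here is a small Python sketch using boto3 to flag S3 buckets whose ACL grants access to all users. A real CSPM product covers many such rules across providers and accounts; this assumes AWS credentials and read permissions are already configured.

import boto3

# Public-access grantee URI used in S3 ACLs.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets() -> list:
    """Flag buckets whose ACL grants any permission to all users."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") == ALL_USERS:
                flagged.append(bucket["Name"])
                break
    return flagged

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"misconfiguration: bucket '{name}' is publicly accessible")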


Jump Into the DevOps Pool: The Water Is Fine

If you’re thinking that becoming a member of a DevOps team sounds interesting, what do you need to consider? Having experience in just about any aspect of IT gives you the technical foundation to make yourself a viable candidate. Do some research. What does it take to hone your existing skills to become a successful member of a DevOps team? You’ll likely find that it takes you in a direction well within your reach. Your technical skills are just the beginning, though. Your skills will contribute to the broader objective of the DevOps team. Valuable DevOps team members understand how their role fits into the bigger picture. It’s not necessary to know the details of another team member’s discipline. It is, however, important to understand how each of your roles contributes to the DevOps process. This implies taking some time to learn about each role’s function. Becoming an invaluable DevOps team member goes one step further. DevOps engineers who possess or develop the interpersonal skills to work beyond their team in guiding others become key players within an organization.


How to prioritize cloud spending: 5 strategies for architects

The price of spot instances changes over days and weeks, so you can't predict the cost at the time of purchase. The amount of money saved varies depending on the type of resource: Low-priority instances are the least expensive, but they may be unavailable or turn off abruptly depending on capacity demand in the region. But such cases are rare. For example, AWS states that the average interruption frequency across all regions and instance types doesn't exceed 10%. Spot instances are best for stateless workloads, batch operations, and other fault-tolerant or time-flexible tasks. ... Begin by examining your cloud provider's transfer fees. Then, find ways to limit the number of data transfers in your cloud architecture. For example, you may need to change your application behavior and architecture to use computing resources in the closest data location. Transfer on-premises apps that often access cloud-hosted data to the cloud. In contrast to the cloud, specific resources (such as network bandwidth) are considered free in traditional datacenters. So if you move applications from on-premises datacenters, modify your application architecture to limit the amount of data transferred.
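
As a back-of-the-envelope illustration of the trade-offs described above, here is a small Python sketch comparing on-demand and spot costs for a fault-tolerant batch job, including an allowance for rework after interruptions and a data transfer (egress) charge. All prices, the discount and the interruption rate are made-up placeholders, not quotes from any provider.

# Rough cost comparison for a fault-tolerant batch job (all figures illustrative).
ON_DEMAND_PER_HOUR = 0.40      # assumed on-demand price, USD/hour
SPOT_DISCOUNT = 0.70           # assume spot runs about 70% cheaper
INTERRUPTION_RATE = 0.10       # assume up to 10% of spot capacity is reclaimed
RETRY_OVERHEAD_HOURS = 0.5     # assumed rework per interrupted instance
EGRESS_PER_GB = 0.09           # assumed data transfer (egress) price, USD/GB

def batch_job_cost(hours: float, instances: int, egress_gb: float, use_spot: bool) -> float:
    rate = ON_DEMAND_PER_HOUR * ((1 - SPOT_DISCOUNT) if use_spot else 1.0)
    compute = hours * instances * rate
    if use_spot:
        # Expected rework caused by interruptions, billed at the spot rate.
        compute += instances * INTERRUPTION_RATE * RETRY_OVERHEAD_HOURS * rate
    transfer = egress_gb * EGRESS_PER_GB
    return round(compute + transfer, 2)

print("on-demand:", batch_job_cost(8, 20, 50, use_spot=False))
print("spot     :", batch_job_cost(8, 20, 50, use_spot=True))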


Defensive Cyber Attacks Declared Legal by UK AG

The move highlights a general lack of international agreement about when defensive cyber attacks should be considered appropriate. There has long been a murky world of online espionage in which countries have tacitly agreed to not respond with military force, due in no small part to degrees of plausible deniability and a great difficulty in displaying concrete evidence to the public that another nation’s covert hacking teams were behind a virtual break-in. This unofficial understanding has survived in the internet age, even as allies have been caught spying on each other, so long as everyone refrained from using cyber attacks to cause physical damage. Some developments in recent years have strained that arrangement, including Russia’s repeated cyber attacks on services in Ukraine and the recent willingness of cyber criminals to hit foreign critical infrastructure and government agencies with ransomware attacks. The UK AG has expressed that there is a pressing need to establish formal rules regarding defensive cyber attacks given the demonstrated possibility of devastating incidents that could cause actual damage to civilians, and that existing non-intervention agreements could serve as a launch point.


How AI can give companies a DEI boost

Although many companies are experimenting with AI as a tool to assess DEI in these areas, Greenstein noted, they aren’t fully delegating those processes to AI, but rather are augmenting them with AI. Part of the reason for their caution is that in the past, AI often did more harm than good in terms of DEI in the workplace, as biased algorithms discriminated against women and non-white job candidates. “There has been a lot of news about the impact of bias in the algorithms looking to identify talent,” Greenstein said. For example, in 2018, Amazon was forced to scrap its secret AI recruiting tool after the tech giant realized it was biased against women. And a 2019 study conducted by Harvard Business Review concluded that AI-enabled recruiting algorithms introduced anti-Black bias into the process. AI bias is caused, often unconsciously, by the people who design AI models and interpret the results. If an AI is trained on biased data, it will, in turn, make biased decisions. For instance, if a company has hired mostly white, male software engineers with degrees from certain universities in the past, a recruiting algorithm might favor job candidates with similar profiles for open engineering positions.
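
One simple fairness check that teams apply to screening models is the selection-rate ratio (sometimes called the "four-fifths rule"). Below is a minimal Python sketch with made-up numbers, assuming you already have the model's pass/fail decisions grouped by candidate demographic; it is a first-pass diagnostic, not a complete bias audit.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below threshold x the best-treated group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative, fabricated screening outcomes (not real hiring data).
sample = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 15 + [("group_b", False)] * 85
)
print(disparate_impact(sample))   # e.g. {'group_b': 0.375} -> investigate the model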


A CFO’s perspective on sustainable, inclusive growth

We’ve faced an ongoing health crisis that turned into a social crisis that went to an economic crisis and, unfortunately, we’re facing humanitarian crises, such as the war in Ukraine. But the fact of the matter is, people are making decisions, different decisions than where we were three to five years ago. And I believe they’re challenging the purpose of organizations, businesses, and leadership. As we talk about sustainability and inclusivity with that combination of the foundation for growth, that’s what the priorities of people are today. You asked about today’s CFOs and sustainability, inclusivity, growth. I truly believe that history will be written about these times that we’ve been operating in. As CFOs, we’re always—Eric, as you know quite well—focused on the what: productivity, efficiency, operational stability, liquidity. But I think these times will be less about pure financials and more about a culture. And when I think about culture, IBM—let me give a little shout out to my company—has a framework. We’ve been in existence for 111 years. We have a framework around culture that’s really grounded in purpose, united in values, and demonstrated through growth behaviors. 


Container adoption: 5 expert tips

“If you want to move beyond containers as a tool for developers and put them into production, that means you’ll also be adopting an orchestration layer like Kubernetes and the various monitoring, CI/CD, logging, and tracing tools that go with it,” Haff says. “Which is exactly what many organizations are doing.” Containers and Kubernetes tend to go hand-in-hand because without that orchestration layer, teams otherwise find that managing containers at any kind of scale in production requires untenable effort. Haff notes that 70 percent of IT leaders surveyed in the State of Enterprise Open Source 2022 report said their organizations were using Kubernetes. Speaking of open source, containerization has open source DNA – and adoption often leads to uptake of other open source technologies, too. Make sure you’re using up-to-date, reliable, and secure code. “Containerization leads to more use of open source and other public components,” Korren says. “There are a lot of useful, well-maintained code components on the Internet, but there are many that are not.”


Create End-To-End Integration Of Tools & Data For Flow Insight & Traceability

Without a long-term strategy or clearly assigned data custody across the digital product lifecycle, data access and management become fragmented between process owners, application owners, or development teams, growing more unstable with every company reorganization or staff departure. Many organizations reluctantly conclude that data islands, duplicate data stores, and conflicting data are inevitable. The chain reaction of resulting issues is both overwhelming and costly. It may not be possible to do a meaningful root cause analysis to resolve incidents, assess the efficiency of digital product delivery, assess the value compared with cost, or receive valuable feedback from development before deployment. Design flaws are repeated, and incorrect processes are unintentionally reinforced. The lack of end-to-end visibility results in slow response times to development, change, and incident tickets because there is no traceability or data integrity for tracking down the root cause of problems. Add to this that when data ownership is transferred or unclear, frustrated teams may dodge responsibility and throw issues “over the fence” to other stakeholders over the course of the digital product’s lifecycle.


Using Behavioral Analytics to Bolster Security

Josh Martin, product evangelist at security firm Cyolo, explains that behavioral analytics would not be possible without ML and AI. “The data collected from the detection phase will be fed into multiple AI and ML models that will allow for deeper inspection of access habits to detect patterns or outliers for specific users,” he says. He outlines a potential use case for behavioral analytics and zero trust focused on a team member working from home. This user logs in every day from their corporate Mac around 8:00 in the morning and logs into either Salesforce or O365 first thing. “Considering this is normal for the user, the AI/ML mechanisms will start to look for anything outside of this baseline,” Martin says. “So, when the user takes a vacation to a different state and uses a personal Windows laptop to access ADP around 10 o’clock at night, this would raise a flag and shut down further authentication attempts until a security analyst can investigate. In this case, it could have been a malicious entity using stolen credentials to access payroll information.” From his perspective, behavioral analytics is likely to become the new norm as AI/ML products and knowledge become more accessible to the masses.
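
A minimal sketch of the baseline-and-outlier modeling described in this scenario, using scikit-learn's IsolationForest in Python; the feature encoding, training data and decision are simplified assumptions rather than how any particular product works.

from sklearn.ensemble import IsolationForest

# Each login event is encoded as [hour_of_day, is_corporate_device, is_home_location].
# Baseline: logins from the corporate Mac at the usual location, around 08:00.
baseline = [[8, 1, 1], [8, 1, 1], [9, 1, 1], [7, 1, 1], [8, 1, 1]] * 20

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# New event: 22:00, personal Windows laptop, unfamiliar location.
suspicious = [[22, 0, 0]]
if model.predict(suspicious)[0] == -1:          # -1 means outlier
    print("anomalous access: block and escalate to a security analyst")
else:
    print("within baseline: allow")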


Rekindling the thrill of programming

We could say that programming is an activity that moves between the mental and the physical. We could even say it is a way to interact with the logical nature of reality. The programmer blithely skips across the mind-body divide that has so confounded thinkers. “This admitted, we may propose to execute, by means of machinery, the mechanical branch of these labours, reserving for pure intellect that which depends on the reasoning faculties.” So said Charles Babbage, originator of the concept of a digital programmable computer, who was conceiving of computing in the 1800s. Babbage and his collaborator Lovelace were conceiving not of a new work, but of a new medium entirely. They wrangled out of the ether a physical ground for our ideations, a way to put them to concrete test and make them available in that form to other people for consideration and elaboration. In my own life of studying philosophy, I discovered the discontent of a form of thought whose rubber never meets the road. In this vein, Mr. Brooks completes his thought above when he writes, “Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself.”



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis