Daily Tech Digest - August 20, 2021

Identity security: a more assertive approach in the new digital world

Perimeter-based security, where organisations only allow trusted parties with the right privileges to enter and leave, doesn’t suit the modern digitalised, distributed environment of remote work and cloud applications. It’s just not possible to put a wall around a business that’s spread across multiple private and public clouds and on-premises locations. This has led to the emergence of approaches like Zero-Trust – built on the idea that organisations should not automatically trust anyone or anything – and the growth of identity security as a discipline, which applies Zero-Trust principles at the scale and complexity required by modern digital business. Zero-Trust frameworks demand that anyone trying to access an organisation’s system is verified every time before access is granted on a ‘least privilege’ basis, which is particularly useful given the growing need to audit machine identities. Typically, they operate by collecting information about the user, endpoint, application, server, policies and all related activities, and feeding it into a data pool which fuels machine learning (ML).
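
To make the excerpt's "verify every time, least privilege" idea concrete, here is a minimal, illustrative Python sketch of a per-request access decision. It is not taken from any Zero-Trust product or from the article; the signal names, roles and policy rules are assumptions invented for the example.

```python
# Minimal sketch of a Zero-Trust style access decision: every request is
# re-evaluated against identity, device and context signals, and access is
# granted only if a narrowly scoped role covers the requested action.
# All field names and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_trusted: bool   # e.g. a managed, patched endpoint
    mfa_verified: bool     # verified for *this* session, never remembered forever
    resource: str
    action: str            # "read" | "write" | "admin"

# Least-privilege policy: each role maps to the narrow set of actions it needs.
ROLE_PERMISSIONS = {
    "analyst":  {("reports", "read")},
    "operator": {("reports", "read"), ("jobs", "write")},
}

def authorize(req: AccessRequest, role: str) -> bool:
    # Never trust by network location alone: every signal must pass on every call.
    if not (req.device_trusted and req.mfa_verified):
        return False
    return (req.resource, req.action) in ROLE_PERMISSIONS.get(role, set())

# A verified analyst may read reports but is denied a write to jobs.
print(authorize(AccessRequest("alice", True, True, "reports", "read"), "analyst"))  # True
print(authorize(AccessRequest("alice", True, True, "jobs", "write"), "analyst"))    # False
```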


How Can We Make It Easier To Implant a Brain-Computer Interface?

As for implantable BCIs, so far there are only the Blackrock NeuroPort Array (Utah Array) implant, which also has the largest number of subjects implanted and the longest documented implantation times, and the Stentrode from Synchron, which has just recorded its first two implanted patients. The latter is essentially based on a stent that is inserted into the blood vessels in the brain and used to record EEG-type data (local field potentials (LFPs)). It is a very clever solution and surgical approach, and I do believe that it has great potential for a subset of use cases that do not require the high level of spatial and temporal resolution that our electrodes offer. I am also looking forward to seeing the device’s long-term performance. Our device records single-unit action potentials (i.e., signals from individual neurons) and LFPs with high temporal and spatial resolution and high channel count, allowing significant spatial coverage of the neural tissue. It is implanted by a neurosurgeon who creates a small craniotomy (i.e., opens a small hole in the skull and dura) and manually inserts the device in the previously determined location.


Artificial Intelligence (AI): 4 characteristics of successful teams

In most instances, AI pilot programs show promising results but then fail to scale. Accenture surveys point to 84 percent of C-suite executives acknowledging that scaling AI is important for future growth, yet a whopping 76 percent also admit that they are struggling to do so. The only way to realize the full potential of AI is by scaling it across the enterprise. Unfortunately, some AI teams think only in terms of executing a workable prototype to establish proof-of-concept, or, at best, transforming a department or function. Teams that think enterprise-scale at the design stage can go successfully from pilot to enterprise-scale production. They often build and work on ML-Ops platforms to standardize the ML lifecycle and build a factory line for data preparation, cataloguing, model management, AI assurance, and more. AI technologies demand huge compute and storage capacities, which often only large, sophisticated organizations can afford. Because resources are limited, AI access is privileged in most companies. This compromises performance because fewer minds mean fewer ideas, fewer identified problems, and fewer innovations.


Software Testing in the World of Next-Gen Technologies

If there is a technology that has gained momentum during the past decade, it is artificial intelligence. AI offers the potential to mimic human tasks and improve operations through its own intellect, and the logic it brings to business creates scope for productive inferences. However, the benefits of AI can only be achieved by feeding computers with data sets, and this requires the right QA and testing practices. Wherever automated testing is implemented to derive results, performance can only be achieved by using the right input data, which in turn leads to effective processing. Moreover, the improvement of AI solutions benefits not only other industries but QA itself, since many testing and quality assurance processes depend on automation technology powered by artificial intelligence. The introduction of artificial intelligence into the testing process has the potential to enable smarter testing. In turn, testing AI solutions could enable software technologies to develop better reasoning and problem-solving capabilities.


What Makes Agile Transformations Successful? Results From A Scientific Study

The ultimate test of any model is to test it with every Scrum team and every organization. Since this is not practically feasible, scientists use advanced statistical techniques to draw conclusions about the population from a smaller sample of data from that population. Two things are important here. The first is that the sample must be big enough to reliably distinguish effects from the noise that always exists in data. The second is that the sample must be representative enough of the larger population in order to generalize findings to it. It is easy to understand why. Suppose that you’re tasked with testing the purity of the water in a lake. You can’t feasibly check every drop of water for contaminants. But you can sample some of the water and test it. This sample has to be big enough to detect contaminants and small enough to remain feasible. It's also possible that contaminants are not equally distributed across the lake. So it's a good idea to sample and test a bucket of water at various spots from the lake. This is effectively what happens here.
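
The lake analogy can be made concrete with a small simulation: a large sample drawn from a single spot can still miss localised contamination, while smaller samples spread across every region track the true rate. The Python sketch below is purely illustrative; all numbers in it are invented.

```python
# Toy illustration of why a sample must be both large enough and representative:
# a "lake" of 100,000 readings where contamination is concentrated in one region.
import random

random.seed(42)
lake = [[1.0 if random.random() < (0.2 if region == 0 else 0.01) else 0.0
         for _ in range(10_000)]
        for region in range(10)]          # region 0 is the polluted corner

true_rate = sum(map(sum, lake)) / 100_000

# One big bucket taken from a single spot (large sample, not representative):
single_spot = random.sample(lake[3], 1_000)
# Smaller buckets taken across every region (representative):
spread_out = [x for region in lake for x in random.sample(region, 100)]

print(f"true rate:          {true_rate:.3f}")
print(f"one-spot sample:    {sum(single_spot)/len(single_spot):.3f}")  # misses the hot spot
print(f"stratified sample:  {sum(spread_out)/len(spread_out):.3f}")    # close to the true rate
```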


OAuth 2.0 and OIDC Fundamentals for Authentication and Authorization

The main goal of OAuth 2.0 is delegated authorization. In other words, as we saw earlier, the primary purpose of OAuth 2.0 is to grant an app access to data owned by another app. OAuth 2.0 does not focus on authentication, and as such, any authentication implementation using OAuth 2.0 is non-standard. That’s where OpenID Connect (OIDC) comes in. OIDC adds a standards-based authentication layer on top of OAuth 2.0. The Authorization Server in the OAuth 2.0 flows now assumes the role of Identity Server (or OIDC Provider). The underlying protocol is almost identical to OAuth 2.0 except that the Identity Server delivers an Identity Token (ID Token) to the requesting app. The Identity Token is a standard way of encoding the claims about the authentication of the user. We will talk more about identity tokens later. ... For both these flows, the app/client must be registered with the Authorization Server. The registration process results in the generation of a client_id and a client_secret which must then be configured on the app/client requesting authentication.
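
As a rough sketch of what the app sees once the OIDC layer is added, the snippet below decodes the payload of an ID Token (a JWT) to read its standard claims. The issuer, client_id and token values are placeholders, and signature validation is deliberately omitted here even though it is mandatory in real code.

```python
# Minimal sketch: what an app sees after an OIDC flow completes.
# The code exchange returns both an access_token (OAuth 2.0, for calling APIs)
# and an id_token (OIDC, describing the authenticated user).
# Values below are placeholders; signature/issuer verification is deliberately
# omitted here but is mandatory in real code (e.g. via a JOSE/JWT library).
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    # A JWT is header.payload.signature, each part base64url-encoded.
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical token response from the Identity Server's token endpoint:
token_response = {
    "access_token": "opaque-or-jwt-access-token",
    "id_token": (
        "eyJhbGciOiJub25lIn0."                       # header: {"alg":"none"}
        + base64.urlsafe_b64encode(json.dumps({
            "iss": "https://idp.example.com",        # who issued the token
            "sub": "user-123",                       # stable identifier of the user
            "aud": "my-client-id",                   # the registered client_id
            "exp": 1924992000,                       # expiry (epoch seconds)
        }).encode()).rstrip(b"=").decode()
        + ".signature"                               # placeholder, not a real signature
    ),
}

claims = decode_jwt_payload(token_response["id_token"])
print(claims["sub"], claims["aud"])   # the app now knows *who* authenticated
```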


How Biometric Solutions Are Shaping Workplace Security

Today, the corporate world and biometric technology go hand in hand. Companies cannot operate seamlessly without biometrics. Regular security checks just don’t cut it in companies anymore. Since biometric technologies are designed specifically to offer the highest level of security, there is little to no room for defrauding these systems. Thus, technologies like ID Document Capture, Selfie Capture, 3D Face Map Creation, etc., are becoming the best way to secure the workplace. Biometric technology allows for specific data collection. It doesn’t just reduce the risk of a data breach but also protects important data in offices. Whether it’s cards, passwords, documents, etc., biometric technology eliminates the need for such hackable security measures at the workplace. All biometric data like fingerprints, facial mapping, and so on are extremely difficult to replicate. Certain biological characteristics don’t change with time, and that prevents authentication errors. Hence, there’s limited scope for identity replication or mimicry. Customized personal identity access control has become an employee’s right of sorts.


How to avoid being left behind in today’s fast-paced marketplace

The ability to speed up processes and respond more quickly to a highly dynamic market is the key to survival in today’s competitive business environment. For many large businesses, the ERP system forms a crucial part of the digital core, which is supplemented by best-of-breed applications in areas such as customer experience, supply chain, and asset management. When it comes to digitalisation, organisations will often focus on these applications and the connections between them. However, we often see businesses forget to automate processes in the digital core itself — an oversight that can negatively impact other digitalisation efforts. For example, the ability to analyse demand trends on social media in the customer-focused application can offer valuable insights, but if it takes months to access the product data needed to launch a new product variant, customer trends are likely to have already moved on. If we look more closely at the process of launching a new product to market, this is a prime example of where digital transformation can be applied to help manufacturers remain agile and respond to market trends more quickly.


FireEye, CISA Warn of Critical IoT Device Vulnerability

Kalay is a network protocol that helps devices easily connect to a software application. In most cases, the protocol is implemented in IoT devices through a software development kit that's typically installed by original equipment manufacturers. That makes tracking devices that use the protocol difficult, the FireEye researchers note. The Kalay protocol is used in a variety of enterprise IoT and connected devices, including security cameras, but also dozens of consumer devices, such as "smart" baby monitors and DVRs, the FireEye report states. "Because the Kalay platform is intended to be used transparently and is bundled as part of the OEM manufacturing process, [FireEye] Mandiant was not able to create a complete list of affected devices and geographic regions," says Dillon Franke, one of the three FireEye researchers who conducted the research on the vulnerability. FireEye's Mandiant Red Team first uncovered the vulnerability in 2020. If exploited, the flaw can allow an attacker to remotely control a vulnerable device, "resulting in the ability to listen to live audio, watch real-time video data and compromise device credentials for further attacks based on exposed device functionality," the security firm reports.


An Introduction to Blockchain

The distributed ledger created using blockchain technology is unlike a traditional network, because it does not have the central authority common in a traditional network structure. In a traditional structure, decision-making power usually resides with a central authority, which decides on all aspects of the environment, and access to the network and data is controlled by the individual responsible for that environment. The traditional database structure is therefore controlled by power. This is not to say that a traditional network structure is not effective. Certain business functions may best be managed by a central authority. However, such a network structure is not without its challenges. Transactions take time to process and cost money; they are not validated by all parties due to limited network participation, and they are prone to error and vulnerable to hacking. Processing transactions in a traditional network structure also requires technical skills. In contrast, the distributed ledger is controlled by rules, not a central authority. The database is accessible to all members of the network and installed on all the computers that use it. Consensus between members is required to add transactions to the database.
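
A toy hash-chained ledger makes the "controlled by rules, not a central authority" point concrete: every block commits to the previous block's hash, so any member can verify the history and no single party can quietly rewrite it. The Python sketch below is illustrative only and omits real consensus mechanisms such as proof-of-work.

```python
# Toy append-only ledger: each block's hash covers the previous block's hash,
# so tampering with any past transaction breaks every later link.
# Real blockchains add a consensus protocol (proof-of-work, BFT, etc.), omitted here.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Any member can run this check; validity is a property of the data itself.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                          # True

ledger[0]["transactions"][0]["amount"] = 500   # a member tries to rewrite history
print(verify(ledger))                          # False -- the chain no longer validates
```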



Quote for the day:

"Nothing is less productive than to make more efficient what should not be done at all." -- Peter Drucker

Daily Tech Digest - August 19, 2021

XSS Bug in SEOPress WordPress Plugin Allows Site Takeover

“The permissions_callback for the endpoint only verified if the user had a valid REST-API nonce in the request,” according to the posting. “A valid REST-API nonce can be generated by any authenticated user using the rest-nonce WordPress core AJAX action.” Depending on what an attacker updates the title and description to, it would allow a number of malicious actions, up to and including full site takeover, researchers said. “The payload could include malicious web scripts, like JavaScript, due to a lack of sanitization or escaping on the stored parameters,” they wrote. “These web scripts would then execute any time a user accessed the ‘All Posts’ page. As always, cross-site scripting vulnerabilities such as this one can lead to a variety of malicious actions like new administrative account creation, webshell injection, arbitrary redirects and more. This vulnerability could easily be used by an attacker to take over a WordPress site.” To protect their websites, users should upgrade to version 5.0.4 of SEOPress. Vulnerabilities in WordPress plugins remain fairly common. 


How building a world class SOC can alleviate security team burnout

In the short term, this alert overload means an increased potential for high-risk threats being missed as analysts attempt to slog through as many alerts as possible alongside their other duties. Aside from the immediate security issues, this kind of environment poses some serious long-term problems. The frustrations of burnt-out teams can build to the point where analysts will decide to quit their jobs in search of less stressful positions. We have found that around half of security personnel are considering changing roles at any given time. Not only will they be taking their experience and skills with them, but the ongoing cyber skills shortage means finding a replacement may be a long and costly process. A team that spends most of its time trudging through alerts and running to put out security fires will also have very little time left for any higher-level strategic activity. This might include undertaking in-depth risk analysis and establishing improved security strategies and processes. Without this activity, the organization will struggle to keep up with evolving cyber threats.


Security through obscurity no longer works

You might expect that companies would be better off keeping their cards close to their chest. The less hackers know about how a company guards its data, the safer the data becomes, according to this line of thinking. In fact, the opposite is true. Secrecy in cyber security puts everyone at risk: the company, its customers, and its suppliers. Electric vehicles serve as a good example of the value of openness in cyber security. Many models require extremely sophisticated software that has to be updated frequently. For example, Tesla distributes updates to owners at least once per month. To deliver updates, an electric car maker requires worldwide access privileges to the on-board computers on its cars. Naturally, car owners want certainty that this does not expose them to hacking, remote carjackings and shutdowns, or being spied on as they drive. For this reason, makers of electric vehicles need to be extremely open about their cyber security so that owners, or trusted experts, can assess if the company’s systems offer effective protection. Although they do not themselves manage data, telecom equipment makers take their responsibility in supplying network operators just as seriously as makers of electric cars.


Container Best Practices: What They Are and Why You Should Care

One of the common pitfalls organizations make is to succumb in practice to the misperception that minification of containers IS container best practices. Without a doubt, an outsized amount of time and energy is spent thinking about reducing the size of a container image (minification), and with good reason. Smaller images are safer; faster to push, pull, and scan; and just generally less cumbersome in the development lifecycle. That’s why “shrinking a container” has become a common subject for blog posts, video tutorials and Twitter posts. It’s also why the DockerSlim open source project, created and maintained by Kyle Quest, is so popular. It is best known for its ability to automatically create a functionally equivalent but smaller container. Another common tactic for container minification could be described as “The Tale of Two Containers.” In this approach, developers first create a “dev container” comprising all the tools they love to use for development. Then, once development is complete, developers convert their “dev containers” to “prod containers,” typically by replacing the “heavy” underlying base image with something lighter and more secure.


What is Today’s Relevance of an Enterprise Architecture Practice?

It seems that, especially in modern tech companies, the importance of the Enterprise Architecture (EA) practice is decreasing. Some organizations might even consider it an irrelevant practice. In the following, we analyze where such opinions emerge from. In the later parts of this series, we will provide arguments against that reasoning and offer an analysis which underpins that this is not the end of Enterprise Architecture as a practice. However, Enterprise Architecture will go through a transformation towards an adapted set of activities, new priorities, and new required skills. ... Apart from the arguments above, there is an additional observation, which is common across many different organizations: The more old-world / legacy IT an organization has, the more important the Enterprise Architects in the organization are. Similarly, in organizations with both old- and new-world IT, Enterprise Architects are responsible for managing the architecture of the old world. However, they have little influence on the development of the new-world IT: the digital area.


How computer vision works — and why it’s plagued by bias

Like machine learning overall, computer vision dates back to the 1950s. Without our current computing power and data access, the technique was originally very manual and prone to error. But it did still resemble computer vision as we know it today; the effectiveness of first processing images according to basic properties like lines or edges, for example, was discovered in 1959. That same year also saw the invention of a technology that made it possible to transform images into grids of numbers, which incorporated the binary language machines could understand into images. Throughout the next few decades, more technical breakthroughs helped pave the way for computer vision. First, there was the development of computer scanning technology, which for the first time enabled computers to digitize images. Then came the ability to turn two-dimensional images into three-dimensional forms. Object recognition technology that could recognize text arrived in 1974, and by 1982, computer vision really started to take shape. In that same year, one researcher further developed the processing hierarchy, just as another developed an early neural network.


John Oliver on ransomware attacks: ‘It’s in everyone’s interest to get this under control’

Most ominously, ransomware attacks now threaten numerous internet-connected, “smart” in-home devices, such as thermostats, TVs, ovens or even internet-enabled sex toys, such as a butt plug. Which prompted Oliver to remind his audience “arseholes are like opinions – letting the internet be in charge of yours is a really bad idea”. Oliver was legally obligated to say that the butt plug comes with a physical key for emergencies, “which I’m not sure is completely reassuring – keys do get lost, don’t they? Just picture the last time you searched for keys around your house and now raise the stakes significantly.” The point, he continued, was that the costs of ransomware keep rising, as the barrier to entry keeps falling. The explosion in attacks derives from three main factors. First, ransomware as a service, as in hacking programs sold a la carte, requiring no technical know-how. “Ideally, no one would launch ransomware attacks,” said Oliver, “but my next preference would be that launching one should require significantly more work than simply clicking ‘add ransomware to cart.’”


IoT could drive adoption of near-premises computing

Strategically, it's not a major leap to consider near-premises data centers that are hybrid, on premises or cloud-based. However, there are always issues, such as figuring out how to redeploy when you have budget constraints and existing resources that must stay working. CIOs and infrastructure architects must also find time to reconstruct IT infrastructure for near-premises computing. Crawford said that enterprises adopting near-premises computing can reduce their compute and storage infrastructure TCO by 30% to 50% and eliminate most or all of the capital costs they would typically need to spend on the data center itself; and that these gains can be further compounded by turning capital expenses into operating expenses through new scalable service models. If CIOs can demonstrate these gains in the cost models that they prepare for IT budgets, near-premises computing may indeed become a new implementation strategy at the edge. Don't overlook the resilience that near-premises computing brings. "The performance of near-premises computing rivals that of on-premises computing but also has the capability to add significantly more resilience," Crawford said.


Enterprise Architecture for Digital Business: Integrated Transformation Strategies

In order to move forward with the DT journeys in this new horizon of the post-pandemic era, practitioners must consider a broader perspective of EA. They must review the impacts as well as synergies of innovation, disruption, and collaboration with their transformation initiatives. An innovation is not just a new way of developing and deploying business solutions – it is also a way to deliver tangible business outcomes to customers proactively and consistently. Disruption often leverages innovation to accentuate the changes in a business using emerging technology trends. Collaboration harnesses the power of innovation and disruption to enable practitioners to work together and achieve quantifiable business results. It is evident that in this near-post-pandemic era, a new horizon of the business world is evolving. Practitioners must endure rapid changes through the use of digital transformation while leveraging a nimble, flexible, and agile enterprise architecture framework that embraces the essence of innovation, disruption, and collaboration efficiently.


What it means to be a Human leader

Listening should be an everyday task. Leaders discover what is on their staff’s mind only by listening, whether that is a set-piece exercise or on an ongoing basis. Charlie Jacobs, the senior partner at London-based law firm Linklaters since 2016, tries to do this by putting himself in places where he can have informal conversations. Back when business travel was commonplace, whenever he arrived in one of Linklaters’ 30 offices around the world, he headed to the gym, not the boardroom, to find out what was going on. Jacobs was no fan of after-hours drinks and preferred a pre-work spinning class that allowed him to mingle with colleagues from all levels while working up a sweat. “I get a different cross-section of people coming, we get a shake or a fruit juice afterwards, and they can see a more down-to-earth side to the senior partner,” he told me. ... Human leaders are focused on making the best use of their time and keeping organizations focused on their mission. They act as executive sponsors to pluck ideas from within their organization and ensure that promising projects make headway.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - August 18, 2021

True Success in Process Automation Requires Microservices

To future-proof early investments in RPA, organizations need to implement an orchestration layer separate from their bot layer. Many RPA implementations lack a “driving” capability that can connect one process to another. In the insurance example above, one bot that inputs a claim can connect to another that inputs data into the modern CRM system (and so on until the claims process is completed). To take that modernization a step further, development teams can focus on replacing RPA bots one by one with applications built on a microservices architecture. Ripping and replacing legacy systems is expensive and daunting for most organizations. In reality, gradual digital transformation makes more sense. RPA bots can help enable this transformation by keeping legacy systems functional while developers re-architect and modernize business applications in order of priority. Think of it as switching a house over to LED bulbs one by one — you can still keep the lights on in the rest of the house as each bulb gets updated.


Ransomware recovery: 8 steps to successfully restore from backup

"In many cases, enterprises don't have the storage space or capabilities to keep backups for a lengthy period of time," says Palatt. "In one case, our client had three days of backups. Two were overwritten, but the third day was still viable." If the ransomware had hit over, say, a long holiday weekend, then all three days of backups could have been destroyed. "All of a sudden you come in and all your iterations have been overwritten because we only have three, or four, or five days." ... In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. "Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think," says Amr Ahmed, EY America's infrastructure and service resiliency leader. This catalog contains all the metadata for the backups, the index, the bar codes of the tapes, the full paths to data content on disks, and so on. "Your backup media will be unusable without the catalog," Ahmed says. 


LockBit 2.0 Ransomware Proliferates Globally

“Once in the domain controller, the ransomware creates new group policies and sends them to every device on the network,” Trend Micro researchers explained. “These policies disable Windows Defender, and distribute and execute the ransomware binary to each Windows machine.” This main ransomware module goes on to append the “.lockbit” suffix to every encrypted file. Then, it drops a ransom note into every encrypted directory threatening double extortion; i.e., the note warns victims that files are encrypted and may be publicly published if they don’t pay up. ... Trend Micro has been tracking LockBit over time, and noted that its operators initially worked with the Maze ransomware group, which shut down last October. Maze was a pioneer in the double-extortion tactic, first emerging in November 2019. It went on to make waves with big strikes such as the one against Cognizant. In summer 2020, it formed a cybercrime “cartel” – joining forces with various ransomware strains (including Egregor) and sharing code, ideas and resources.


Mandiant Discloses Critical Vulnerability Affecting Millions of IoT Devices

Over the course of several months, the researchers developed a fully functional implementation of ThroughTek’s Kalay protocol, which enabled the team to perform key actions on the network, including device discovery, device registration, remote client connections, authentication and, most importantly, processing audio and video (“AV”) data. Just as important as processing AV data, the Kalay protocol also implements remote procedure call (“RPC”) functionality. This varies from device to device but typically is used for device telemetry, firmware updates, and device control. Having written a flexible interface for creating and manipulating Kalay requests and responses, Mandiant researchers focused on identifying logic and flow vulnerabilities in the Kalay protocol. The vulnerability discussed in this post affects how Kalay-enabled devices access and join the Kalay network. The researchers determined that the device registration process requires only the device’s 20-byte uniquely assigned identifier (called a “UID” here) to access the network.


CQRS in Java Microservices

Command Query Responsibility Segregation (CQRS) is a pattern in service architecture. It is a separation of concerns, that is, a separation of services that write from services that read. Why would you want to separate read and write services? One of the advantages of microservices is the ability to scale services independently. We can often say with some level of certainty that one set of services will be busier than others. If they are separate, they can be scaled to best fit the normal use case and conserve cloud cycles. I will be looking into CQRS provided by the Axon library. Axon implements CQRS with event sourcing. The idea behind event sourcing is that your commands are executed by sending events to all subscribers. Instead of storing state in your persistence store, you store the immutable events, so you always have a record of the events that led up to a particular state. Inside your program, you will have an aggregate, which represents a stateful object but is ephemeral in that the system can bring it in and out of existence as needed.
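
Axon is a Java framework, so the snippet below is not its API; it is only a language-neutral sketch (written in Python for brevity) of the two ideas in the excerpt: the write side appends immutable events and derives aggregate state from them, while the read side serves queries from a separately maintained projection.

```python
# Conceptual sketch of CQRS with event sourcing (not Axon's API):
# - the write side appends immutable events; aggregate state is rebuilt from them
# - the read side keeps its own projection, so reads can be scaled independently
from collections import defaultdict

EVENT_STORE = []                 # source of truth: an append-only event log
READ_MODEL = defaultdict(int)    # projection optimised for queries

def current_balance(account: str) -> int:
    # The aggregate's state is derived from past events, not from stored rows.
    return sum(e["amount"] for e in EVENT_STORE if e["account"] == account)

# --- write side (commands) ---
def handle_command(account: str, amount: int) -> None:
    if amount < 0 and current_balance(account) + amount < 0:
        raise ValueError("insufficient funds")
    event = {"type": "BalanceChanged", "account": account, "amount": amount}
    EVENT_STORE.append(event)    # record *what happened*, immutably
    project(event)               # in real systems this happens asynchronously

# --- read side (queries) ---
def project(event: dict) -> None:
    READ_MODEL[event["account"]] += event["amount"]

def query_balance(account: str) -> int:
    return READ_MODEL[account]   # cheap read, no event replay needed

handle_command("acct-1", 100)
handle_command("acct-1", -40)
print(query_balance("acct-1"))   # 60
print(EVENT_STORE)               # the full history of how we got there
```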


AIOps Strategies for Augmenting Your IT Operations

Enrichment is the unsung hero of the entire event correlation process. Raw alarm data is a start, but it’s not sufficient to be able to pinpoint the root cause and enable an effective fix. When you have alerts coming in from a variety of domains, it can be difficult to correlate them to produce a fine-tuned set of tickets. You can use timestamps or point of origin, but that will provide limited insight, and you'll miss connections between related alerts coming from other sources or from other time windows. Easy-to-deploy alert enrichments add value to every single alert, providing the extra layer of understanding needed to determine which alerts are interrelated, and in what way. This enables you to focus on high-level correlated incidents instead of chasing every low-level alert that comes into the AIOps platform. Done right, this process of enrichment reduces the ‘noise’ and helps you bring in topology information from your CMDB, APM, and orchestration tools; change information from your change management and CI/CD pipelines; and business context from your team’s knowledge and procedures.
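
Mechanically, enrichment is a join between raw alerts and contextual data before correlation. The sketch below attaches hypothetical CMDB topology and recent-change information to alerts and then groups them by the enriched service field; every data source and field name in it is an invented placeholder, not a real AIOps product's schema.

```python
# Toy alert enrichment: raw alerts only carry a host and a message, so we join
# in CMDB topology and recent-change data before correlating.
# All data sources and field names here are invented for illustration.
from collections import defaultdict

CMDB = {"db-7": {"service": "payments", "tier": "database"},
        "web-3": {"service": "payments", "tier": "frontend"}}
RECENT_CHANGES = {"payments": "change #4812 deployed 14:02"}

def enrich(alert: dict) -> dict:
    topo = CMDB.get(alert["host"], {})
    enriched = {**alert, **topo}
    enriched["recent_change"] = RECENT_CHANGES.get(topo.get("service"))
    return enriched

def correlate(alerts: list) -> dict:
    # Group by the enriched service field rather than by timestamp or host alone.
    incidents = defaultdict(list)
    for a in map(enrich, alerts):
        incidents[a.get("service", "unknown")].append(a)
    return incidents

raw = [{"host": "db-7", "msg": "replication lag"},
       {"host": "web-3", "msg": "HTTP 500 spike"}]
for service, related in correlate(raw).items():
    print(service, "->", len(related), "related alerts,", related[0]["recent_change"])
```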


Addressing the demand for global software developer talent

Clear upskilling career paths should be provided for new and experienced software developers. Younger developers will expect rapid career advances — show them fast and more attractive ways forward, such as more opportunities to work on innovation projects and technologies or earn a new job title or salary due to learning a new skill. Experienced developers may want more time to explore new technologies, some freedom to decide what to work on next, or just to shore up what they have been working on for years. A mentoring programme connecting graduates with more experienced developers is a good idea. However, it may add an onerous workload. Supplement that ‘human’ support with tools that, for instance, help monitor code quality, engendering a consistent level of coding practice and reducing the number of errors that escape into production. Be flexible with everyone’s working hours, location, and choice of tools. Give them superior quality hardware and other workplace products to make their jobs as easy as possible. Online training and permission to spend work time on it are essential.


IT Leadership: 11 Future of Work Traits

“Three- to five-year plans got smashed into a single year plan,” says Sarah Pope, VP of Future of Technology, global consulting company Capgemini. “Two priorities that became obvious as a result of COVID are customer experience and employee experience. Customer experience didn't have to be just 'good,' it needed to reflect customers' new behaviors and patterns. Similarly, employee experience wasn't just about technology enablement and corporate culture, but about how work fits into digital lives.” Enterprises have been pushing to reopen their offices and business leaders are well aware that not everyone will want to return. While there's a general acknowledgement that hybrid workplaces will be the norm going forward, few organizations know what that will really look like. However, it's obvious that if some people refuse to return to the office at all, and others only want to work in the office a couple of days per week, businesses need to make smart use of space, people, and time. ... “Secondarily, [they'll want to bring people together in a physical environment] from a maintaining culture and community perspective, [such as hosting] those dinners or workshops that can tack on to a team event.”


5 things to know about pay-per-use hardware

When enterprise IT teams get a quote for consumption-based infrastructure, many will find themselves in unfamiliar territory, having never evaluated this kind of pricing scheme. “It’s easy for HP or Dell to come in and say how much they’re going to charge you per core, but then you realize you have no idea whether that price is fair. That’s not how you calculate things in your own facilities, and it’s apples to oranges versus public cloud costs,” Bowers said. “As soon as enterprises are given a quote, they tend to go into spreadsheet hell for three months, trying to figure out whether that quote is fair. So it can take three, four, five months to negotiate a first deal.” Enterprises struggle to evaluate consumption-based proposals, and they lack confidence in their usage forecasts, Bowers said. “It takes a lot of financial acumen to adopt one of these programs.” Experience can help. “The companies that make the most confident decisions are those that did a lot of leasing in the past. Not because this is a lease, but because those companies have the mental muscles to be able to evaluate the financial aspects of time, value, variable payments, and risks of payment spreads,” Bowers said.


An upbeat outlook for UK IoT sector despite barriers

The concern within the UK IoT sector reflects the fact that permanent roaming – as the typical solution to delivering multi-region IoT projects – remains fraught with problems. These range from the inability of roaming agreements to support device Power Saving Modes, to the frequently arising commercial disputes, the performance issues caused by having to backhaul data, and the fact that several countries have placed a complete ban on permanent roaming. In contrast to the US environment, where two dominant operators (AT&T and Verizon) deliver the majority of coverage, the European environment is far more fragmented, with multiple operators delivering regional coverage. This adds a considerable layer of complexity, and commercial disputes can threaten the viability of multi-region rollouts, creating a concerning degree of risk to IoT projects. UK IoT professionals are very aware that this issue of cellular connectivity must be resolved to ensure the viability of future large-scale projects – eight out of ten agree or strongly agree that the evolution of intelligent connectivity is going to be critical to continue to fuel adoption of IoT.



Quote for the day:

"Good leaders must first become good servants." -- Robert Greenleaf

Daily Tech Digest - August 17, 2021

It May Be Too Early to Prepare Your Data Center for Quantum Computing

The fact that there are multiple radically different approaches to quantum computing under development, with no assurance that any will meet market success (let alone market dominance), speaks to quantum computing's infancy. Merzbacher compares the situation to the early days of microprocessors, when there was a debate on whether computer chips should be made of silicon or germanium. "There were arguments for germanium. It's a better system for semiconductor computing in some sense, but it's expensive, not as easy to manufacture, and it's not as common, so in the end, it was silicon," she said. Quantum computing hasn't reached a point where "everybody settled on a technology here, and so there still is uncertainty. It may be that the IBM approach is better for certain types of computing, and then the trapped-ion approaches [are] better for others." This past March, IonQ became the first publicly traded pure-play quantum computing company via a SPAC merger. According to Merzbacher, the startup appears to have its eye on marketing rack-mounted quantum hardware to the data center market, although it hasn't voiced such intentions publicly.


Lucas Cavalcanti on Using Clojure, Microservices, Hexagonal Architecture ...

One thing to mention about the Cockburn Hexagonal Architecture is that it was born into a Java, object-oriented world. And just to give some context: what we use is not exactly that implementation, but it uses that idea as an inspiration. So in Cockburn's idea you have a web server, and every operation on that web server is a port, and you'll have the adapter – a port is an interface, and the adapter is the actual implementation of that interface, and the rest is the classes that implement it. In our implementation, we use that idea of separating a port, which is the communication with the external world, from the adapter, which is the code that translates that communication into actual code that you can execute. And then the controller is the piece that gets that communication from the external world and runs the actual business logic. I think the Cockburn definition stops at the controller, and after the controller it's already business logic, since we are working with Clojure and functional programming.


Excel 4, Yes Excel 4, Can Haunt Your Cloud Security

Scary? Sure, but still, how hard can it be to spot a macro attack? It’s harder than you might think. Vigna explained XLM makes it easy to create dangerous but obfuscated code. It started with trivial obfuscation methods. For example, the code was written hither and yon and rendered in a white font on a white background. Kid’s stuff. But later versions started using more sophisticated methods, such as hiding by using the VeryHidden flag instead of Hidden. Users can’t unhide a VeryHidden sheet from Excel. You must uncover VeryHidden data with a VBA script or even resort to a hex editor. How many Excel users will even know what a hex editor is, never mind use it? Adding insult to injury, Excel 4 doesn’t differentiate between code and data. So, yes, what looks like data may be executed as code. It gets worse. Vigna added “Attackers may build the true payload one character at a time. They may add a time dependence, making the current day a decryption key for the code. On a wrong day, you’ll just see gibberish.” As VMware security researcher Stefano Ortolani added, Excel 4.0 macros are “easy to use but also easy to complicate.”
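
For defenders, one quick triage step is to list sheets whose visibility state is "veryHidden", since Excel's own UI will not unhide them. The sketch below uses the openpyxl library on a modern .xlsx/.xlsm workbook; the file path is a placeholder, and legacy .xls files or the macro content itself are better handled with dedicated malware-analysis tools such as olevba.

```python
# Rough triage sketch: list worksheets marked "veryHidden" in an .xlsx/.xlsm file
# using openpyxl. The path is a placeholder; this does not extract or analyse
# the macro code itself.
from openpyxl import load_workbook

def very_hidden_sheets(path: str) -> list:
    wb = load_workbook(path)
    # sheet_state is "visible", "hidden", or "veryHidden"; the last cannot be
    # unhidden from Excel's UI, which is why attackers favour it.
    return [ws.title for ws in wb.worksheets if ws.sheet_state == "veryHidden"]

if __name__ == "__main__":
    suspicious = very_hidden_sheets("incoming/invoice.xlsm")   # placeholder path
    print("Very hidden sheets:", suspicious or "none found")
```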


Agile Data Labeling: What it is and why you need it

The concept of Autolabeling, which consists of using an ML model to generate “synthetic” labels, has become increasingly popular in recent years, offering hope to those tired of the status quo, but it is only one attempt at streamlining data labeling. The truth, though, is that no single approach will solve all issues: at the center of autolabeling, for instance, is a chicken-and-egg problem. That is why the concept of Human-in-the-Loop labeling is gaining traction. That said, those attempts feel uncoordinated and bring little to no relief to companies who often struggle to see how those new paradigms apply to their own challenges. That’s why the industry is in need of more visibility and transparency regarding existing tools (a wonderful initial attempt at this is the TWIML Solutions Guide, though it’s not specifically targeted towards labeling solutions), easy integration between those tools, as well as an end-to-end labeling workflow that naturally integrates with the rest of the ML lifecycle. Outsourcing the process might not be an option for specialty use cases for which no third party is capable of delivering satisfactory results.


Brain-computer interfaces are making big progress this year

The ability to translate brain activity into actions was achieved decades ago. The main challenge for private companies today is building commercial products for the masses that can find common signals across different brains that translate to similar actions, such as a brain wave pattern that means “move my right arm.” This doesn’t mean the engine should be able to do so without any fine tuning. In Neuralink’s MindPong demo above, the rhesus monkey went through a few minutes of calibration before the model was fine-tuned to his brain’s neural activity patterns. We can expect this routine to happen with other tasks as well, though at some point the engine might be powerful enough to predict the right command without any fine-tuning, which is then called zero-shot learning. Fortunately, AI research in pattern detection has made huge strides, specifically in the domains of vision, audio, and text, generating more robust techniques and architectures to enable AI applications to generalize. The groundbreaking paper Attention is all you need inspired many other exciting papers with its suggested ‘Transformer’ architecture. 


Here’s how hackers are cracking two-factor authentication security

Our experiments revealed a malicious actor can remotely access a user’s SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronize user’s notifications across different devices. Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily available message mirroring app on a victim’s smartphone via Google Play. This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure. Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly. For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this, they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.


Agile drives business growth, but culture is stifling progress

Senior leaders who invest in upskilling will ensure a culture of innovation in the enterprise. Skills needed today and in the future are identified and learning curves accelerated by providing immersive experiences to supplement learning. At Infosys, we categorize employees into different skill horizons based on workers’ core, digital, and emerging skills. For staying close to the customer through better insights, data is not just a lazy asset locked in systems of record — it is accessible through an end-to-end system that translates customer insights into action. Going further, artificial intelligence taps into unspoken team behaviors and interactions, which research from CB Insights found increases revenue by as much as 63%. Teams will also need to collaborate effectively and make decisions on their own. This will only happen if leaders understand when to guide and when to trust. In our research, we found that the most effective Agile firms (we call these “Sprinters”) are much more likely to foster servant leadership, along with the seven levers described.


Attackers Change Their Code Obfuscation Methods More Frequently

In an analysis posted last week, researchers at the Microsoft 365 Defender Threat Intelligence Team tracked one cybercriminal group's phishing campaign as the techniques changed at least 10 times over the span of a year. The campaign, dubbed XLS.HTML by the researchers, used plaintext, escape encoding, base64 encoding, and even Morse code, the researchers said. Changing up the encoding of attachments and data is not new, but highlights that attackers understand the need to add variation to avoid detection, the Microsoft researchers said. Microsoft's research is not the first to identify the extensive use of obfuscation. Such techniques are as old as malware itself, but more recently, attackers are switching up their obfuscation techniques more frequently. In addition, increasingly user-friendly tools used by cybercriminals intent on phishing make using sophisticated obfuscation much easier. Messaging security provider Proofpoint documented seven obfuscation techniques in a paper published five years ago, and even then, many of the obfuscation techniques were not new, the company said.
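
To illustrate why stacked encodings slow analysts down without stopping them, here is a small sketch that repeatedly strips URL escaping and base64 layers from a string until it stops changing. The sample payload is harmless and invented; real campaigns combined such layers with others, including Morse code.

```python
# Small sketch: iteratively strip common encoding layers (base64, %-escapes)
# until the string stops changing. The sample "payload" below is benign and
# invented purely to demonstrate the unwrapping loop.
import base64
import binascii
import urllib.parse

def peel(layered: str, max_rounds: int = 10) -> str:
    current = layered
    for _ in range(max_rounds):
        previous = current
        current = urllib.parse.unquote(current)                  # undo %xx escapes
        try:
            current = base64.b64decode(current, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            pass                                                 # not a base64 layer
        if current == previous:                                  # nothing left to peel
            break
    return current

# Build a three-layer sample: URL-escaped( base64( base64( %-escaped URL ) ) )
inner = base64.b64encode(b"https%3A%2F%2Fexample.test%2Fpage").decode()
outer = base64.b64encode(inner.encode()).decode()
sample = urllib.parse.quote(outer)

print(peel(sample))   # -> https://example.test/page
```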


Navigating an asymmetrical recovery

The key for many businesses will be to build scenarios that account for a wider diffusion of results than was needed in the past. Take the cinema business as an example. Instead of sales projections being drawn up in a band between down-10% and up-10%, we’ve seen that some businesses can find themselves in a band between down-70% and up-80%. An unexpected upside sounds like a nice problem to have, but it also can create real operating challenges. Few of the companies whose growth was supercharged during the pandemic had a plan for that level of growth, which led to shortages, stock-outs, and delays that undermined performance. Planning for extremes is almost certain to be critical for some time to come. Although there is considerable liquidity overall in the debt markets, whether from traditional loans, bonds, or newer debt funds, companies’ ability to access these markets will vary widely. Regional and country differences in government support, along with variations in capital availability between companies of different sectors and size, are all creating additional asymmetries and unpredictable balance sheet pressures. 


Driving DevOps With Value Stream Management

A value stream, such as a DevOps pipeline, is simply the end-to-end set of activities that delivers value to our customers, whether internal or external to the organization. In an ideal state, work and information flow efficiently, with minimal delays or queuing of work items. So far, this all sounds great. But good things seldom come easily. Let's start with the fact that there are hundreds of tools available to support a Dev(Sec)Ops toolchain. Moreover, it takes specific skills, effort, costs, and time to integrate and configure the tools selected by your organization. While software developers perform the integration effort, the required skills may differ from those available in your software development teams. Also, such work takes your developers away from their primary job of delivering value via software products for your internal and external customers. In short, asking your development teams to build their Dev(Sec)Ops toolchain configurations is a bit like asking manufacturing operators to build their manufacturing facilities.



Quote for the day:

"Great leaders are almost always great simplifiers who can cut through argument, debate and doubt to offer a solution everybody can understand." -- General Colin Powell

Daily Tech Digest - August 16, 2021

Pepperdata CEO says AI ambitions outpace data management reality

When we had just classic databases, data warehouses, and stuff like that, data was managed sort of centrally, and people had a very well-defined view of what was going on. It was very narrow in scope. That definition has been blown to smithereens. It’s like everything is enterprise data. It’s just ballooned. ... People are realizing that data for customer success is really important. That part is becoming more obvious to more people. If somebody comes to my website, and I take three days to respond to them, they’re going to be gone. But if I can respond to them in 30 seconds and say something intelligent, all of a sudden that interaction becomes much more valuable. My sales cycles become much shorter. The rest of it, concerning how to use the data to more efficiently run my business, however, is completely unclear at this point. ... Every time we do a new technology and all of a sudden people invest a ton in it, then you find your finance people are writing it off. This is no different. The data wave has been hyped so much that people are putting more and more money into it. They got to be like Google. They have to be like Facebook.


Banks are moving their core operations into the cloud at a rapid rate. But new tech brings new challenges

As with any concentrated market, there is a risk that cloud providers might start dictating their own terms, at the expense of the stability of the financial system. For example, they could refuse to be transparent by failing to open up their technologies to third-party scrutiny, meaning that it would be impossible to know if providers have baked in sufficient resiliency to carry out banking operations. Modernizing is key, therefore, but it needs to be done cautiously, and with a reliable strategy. For James, the best way forward is to deploy multi-cloud configurations in the financial sector to balance the risk across multiple providers. Only 17% of the financial institutions surveyed by Google have already adopted multi-cloud as an architecture of choice, while 28% rely on single cloud. According to the company, more work needs to be done from a regulatory aspect to incentivize a robust and responsible adoption of cloud among financial organizations. "Consumers' demand for very quick transformation is becoming really overwhelming, and financial services organizations will take shortcuts to deliver on customer expectations as soon as possible," said James.


What is federated learning?

Federated learning starts with a base machine learning model in the cloud server. This model is either trained on public data (e.g., Wikipedia articles or the ImageNet dataset) or has not been trained at all. In the next stage, several user devices volunteer to train the model. These devices hold user data that is relevant to the model’s application, such as chat logs and keystrokes. These devices download the base model at a suitable time, for instance when they are on a wi-fi network and are connected to a power outlet (training is a compute-intensive operation and will drain the device’s battery if done at an improper time). Then they train the model on the device’s local data. After training, they return the trained model to the server. A key property of popular machine learning algorithms such as deep neural networks and support vector machines is that they are parametric. Once trained, they encode the statistical patterns of their data in numerical parameters and they no longer need the training data for inference. Therefore, when the device sends the trained model back to the server, it doesn’t contain raw user data.
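
The server-side step that follows is usually some form of federated averaging: each device returns only its updated parameters, and the server combines them weighted by how many local examples each device trained on. The sketch below reduces the model to a plain list of numbers and uses no real federated-learning framework; it illustrates the aggregation idea only, under those simplifying assumptions.

```python
# Bare-bones federated averaging sketch: devices send back locally trained
# parameters (never raw data); the server averages them, weighted by the
# number of local examples each device used. Model = a plain list of floats.

def local_training_result(base_model, local_gradient, num_examples):
    # Stand-in for on-device training: one gradient-descent-style nudge.
    updated = [w - 0.1 * g for w, g in zip(base_model, local_gradient)]
    return updated, num_examples

def federated_average(updates):
    # updates: list of (parameters, num_examples) tuples returned by devices.
    total = sum(n for _, n in updates)
    dims = len(updates[0][0])
    return [sum(params[i] * n for params, n in updates) / total
            for i in range(dims)]

base = [0.0, 0.0, 0.0]                                   # model shipped to devices
updates = [
    local_training_result(base, [1.0, -2.0, 0.5], num_examples=200),
    local_training_result(base, [0.4, -1.0, 0.1], num_examples=50),
]
new_global_model = federated_average(updates)
print(new_global_model)   # weighted toward the device that trained on more data
```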


How One Rogue User Took Down Our API

A team of developers won’t be able to suss out all the various bugs in your services, but thousands of users will. And it only takes one to exploit a weakness. While our zealous user was the flapping butterfly wing that led to the tornado, it was aided and abetted by our own bad assumptions. Fortunately, there are strategies and tools you can use to mitigate these situations. If you’re lucky, you have a Quality Assurance team dedicated to catching bugs. Have you heard the one about a QA tester walking into a bar? Even if you do have a QA team — and especially if you don’t — automated load, end-to-end, and fuzz testing will also help catch those tricky bugs. I would recommend reading Martin Fowler’s article on The Practical Test Pyramid. In the end, APIs are like chainsaws. They are powerful tools intended to empower our users. But that power needs to come with the necessary safety measures. Without them, your users may end up causing a lot of undue damage to both themselves and you.
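
One of the simplest safety measures hinted at here is per-user rate limiting, so that a single enthusiastic client can exhaust only its own quota rather than the whole API. The sketch below is a generic token-bucket limiter, not tied to any particular framework; the capacity and refill numbers are arbitrary assumptions for the example.

```python
# Minimal per-user token-bucket rate limiter: each user gets `capacity` requests
# that refill at `rate` per second; one rogue client exhausts only its own bucket
# instead of taking the whole API down.
import time

class TokenBucket:
    def __init__(self, capacity: float = 10, rate: float = 1.0):
        self.capacity, self.rate = capacity, rate
        self.buckets = {}          # user_id -> (tokens, last_timestamp)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        tokens, last = self.buckets.get(user_id, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[user_id] = (tokens - 1, now)
            return True
        self.buckets[user_id] = (tokens, now)
        return False

limiter = TokenBucket(capacity=5, rate=0.5)
results = [limiter.allow("rogue-user") for _ in range(8)]
print(results)   # first 5 calls allowed, the rest rejected until tokens refill
```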


Reliance on third party workers making companies more vulnerable to cyberattacks

Too many organizations lack automated and effective methods to centrally track and manage their relationships with the burgeoning number of third parties with whom they do business. This, coupled with the lack of information organizations have about these third parties, makes them a cybercriminal’s best friend. The recent Presidential Executive Order (EO) mandates the federal government “improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.” For organizations looking to make changes to their third party identity risk security measures, there are steps they can implement today including: properly identifying who each third party is and the sensitive data to which they have access; conducting regular user audits to ensure third parties have access based on the least amount of privilege necessary to do their jobs; extending zero trust programs to third party non-employees; and conducting continuous risk ratings of the individuals working within a third party vendor or partner, not just the organization as a whole.


The Intersection of Ecommerce and NFTs: How NFT Technology is Changing DeFi

DeFi (decentralized finance) technology allows for the inherent convenience of centralized markets without allowing the wealth and governance authority to pool into one person’s wallet. Essentially, DeFi is enabled by the blockchain, which enables permission-less, peer-to-peer transactions. This removes middlemen like banks and other large financial institutions. It lowers costs and technical barriers for entrepreneurs and individuals. Fees, documentation, and legal jurisdictions prevent many people across the world from accessing the financial tools they need to succeed. DeFi platforms circumvent the need for all of these things and allow them to transact in a secure environment. NFTs are the driving force behind a significant portion of the DeFi infrastructure. NFTs aren’t limited to collectibles. They represent programmable bits of data stored on the blockchain. The blockchain provides a transparent, hack-proof storage solution. This equates to ownership over pieces of data that can be programmed to do different things when interacted with.


Top seven transformation trends dominating the digital ecosystem

Digital transformation essentially boils down to unlocking value for customers. McKinsey estimates that digital transformation initiatives that focus on customer-centricity increase customer satisfaction by 20-30% and economic gains by 20-50%. Organisations investing in digital transformation are looking to deliver innovative and seamless customer experiences in real-time. There is a greater focus on customer lifetime value (CLV) and the role of innovative customer experiences on long-term customer value. In a continuously evolving digital ecosystem, with no dearth of choice and convenience, customer behaviours are rapidly changing. In such a world, businesses need a holistic view of the entire customer lifecycle to go beyond transactional interactions and establish trust. Organisations are connecting each step in the customer journey to interact with customers, understand their most prominent needs, and uncover an exceptional number of improvement opportunities. This is possible by implementing an automated data collection process and creating a universally available data repository for accurate, traceable, and updated information.


Are enterprises loving managed services?

The reason enterprises want to reduce their network-management burden is difficulty in acquiring and maintaining skilled network-operations specialists. This has been a problem for decades; network-operations specialists have no career paths in most enterprises, so they top out in salary and promotion opportunity. Over half of the 59 enterprises I talked with said that they had a problem retaining a network specialist for more than three years, and 12 said they had problems retaining them for two years. Every enterprise said that it took longer to find qualified network specialists than programmers. ... A close second in terms of managed-service drivers was difficulty in supporting remote sites. The problem with remote network support, said 50 of my managed-service enterprises, is that the best way for diagnosis of network problems at remote sites requires that the network be used to project central technology skills to those locations. Obviously, that's Catch-22 in action. This is one reason why SD-WAN is so often associated with managed services; SD-WAN is all about adding small, remote, sites to the company VPN. 


Windows 365 response shows enterprises are hungry for cloud OS option

While a cloud OS may be attractive to some organizations and users, there will be others needing additional app support that will still require access to a machine with an onboard OS and apps (or at least browser-based access to a different cloud). Many legacy enterprise apps may not run in such an environment and are very unlikely to be migrated, so those users may not be good candidates for a Windows 365 deployment. As a result, I don’t see a cloud OS like Windows 365 becoming the universal (or even dominant) OS anytime soon. The bottom line is that enterprises struggling to manage multiple device types (e.g., PCs and Macs, Android and iOS devices, Chromebooks) that need a single access point (and a single license) to apps might find Windows 365 an attractive option over buying multiple licenses and/or managing multiple user device types at substantial cost. Managing a cloud-based OS is far easier than managing installed OS and app combinations. But for most companies, the current limitations of Windows 365, and the need to run many mature and legacy internal apps, will make Windows 365 a future rather than a current option.


How can we trust a digital identity? A security CEO explains…

It's hard to prove digital identity, and many of the current approaches – such as email + password + SMS PIN codes – add complexity for the user without actually addressing the core issue, which is: can these identities be trusted? As I mentioned before, you can have email addresses that represent your identity online. And you could have multiple [email] addresses – for example, it's easy to get Gmail addresses; they're free, and so bad actors can exploit that freedom. You can have multiple accounts created by using those multiple free email addresses, and bad actors can hide behind them either to commit fraud or just to spread fake news. The trick is very much to have some sort of balance between the freedom and the friction; the freedom and that proven identity. ... But if you can make it something that has an associated security factor – and the phone does this, because it's got the SIM card – then you can have that thing which you're willing to share, but at the same time has a proven credential, and that allows you to build trust associated with that.



Quote for the day:

"Leaders make decisions that create the future they desire." -- Mike Murdock

Daily Tech Digest - August 15, 2021

Scientists removed major obstacles in making quantum computers a reality

Spin-based silicon quantum electronic circuits offer a scalable platform for quantum computation. They combine the manufacturability of semiconductor devices with the long coherence times afforded by spins in silicon. Advancing from current few-qubit devices to silicon quantum processors with upward of a million qubits, as required for fault-tolerant operation, presents several unique challenges. One of the most demanding is the ability to deliver microwave signals for large-scale qubit control. ... Completely reimagining the silicon chip structure turned out to be the solution to the problem. Scientists started by removing the wire next to the qubits. They then applied a novel way to deliver microwave-frequency magnetic control fields across the entire system, an approach that could provide control fields to up to four million qubits. To do so, they added a newly developed component, a crystal prism called a dielectric resonator. When microwaves are directed into the resonator, it focuses the wavelength of the microwaves down to a much smaller size.


Agile strategy: 3 hard truths

One of the primary challenges is that leadership can often be a barrier when an organization is seeking to become more agile. According to last year’s Business Agility Report from Scrum Alliance and the Business Agility Institute, this is the most prevalent challenge that agile coaches report. Some reasons for this include a lack of buy-in and support, resistance to change, having a mindset that’s not conducive to agility, a lack of alignment between agile teams and leadership, lack of understanding, and a deeply rooted organizational legacy regarding management styles. Overcoming legacy structures, cultures, and mindsets can be difficult. Some coaches have reported that leaders view agile as being “for their staff” and not for them. Additionally, leaders may have competing priorities – such as retaining control – which can hinder organization-wide adoption of agile methodologies. Any leader considering an agile transformation must understand that in order to succeed, full executive buy-in is needed and that they too will need to change their way of working and thinking.


Custom Rate Limiting for Microservices

API providers use rate limit design patterns to enforce API usage limits on their clients. This allows API providers to offer reliable service to their clients, and it also allows a client to control its own API consumption. Rate limiting, being a cross-cutting concern, is often implemented at the API Gateway fronting the microservices. A number of API Gateway solutions offer rate-limiting features, but in many cases the custom requirements expected of the gateway lead developers to build their own. The Spring Cloud Gateway project provides a library for developers to build an API Gateway that meets any specific needs. In this article, we will demonstrate how to build an API Gateway using the Spring Cloud Gateway library and develop custom rate-limiting solutions. Consider a SaaS provider that offers APIs to verify a person's credentials through different factors. Any organization that uses the service may invoke APIs to verify credentials obtained from national ID cards, face images, thumbprints, etc. The service provider may have a number of enterprise customers, each offered a rate limit (requests per minute) and a quota (requests per day) depending on their contract.
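
A minimal, framework-agnostic sketch of that scheme is shown below, assuming fixed one-minute and one-day windows; the class and method names are hypothetical. A production implementation would sit inside a gateway filter (for example a Spring Cloud Gateway filter) and keep its counters in a shared store such as Redis so that every gateway instance sees the same counts.

```java
import java.time.Clock;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the scheme described in the article: each client gets a rate limit
// (requests per minute) and a quota (requests per day) based on its contract.
public class ClientRateLimiter {

    /** Per-contract limits; the values used below are illustrative. */
    record Limits(int requestsPerMinute, int requestsPerDay) {}

    /** Mutable counters for one client. */
    private static class Counters {
        long minuteWindowStart, dayWindowStart;
        int minuteCount, dayCount;
    }

    private final Map<String, Limits> contracts;
    private final Map<String, Counters> counters = new ConcurrentHashMap<>();
    private final Clock clock;

    public ClientRateLimiter(Map<String, Limits> contracts, Clock clock) {
        this.contracts = contracts;
        this.clock = clock;
    }

    /** Returns true if the request is allowed, false if the client exceeded its rate limit or quota. */
    public synchronized boolean allow(String clientId) {
        Limits limits = contracts.get(clientId);
        if (limits == null) return false;               // unknown client: reject

        long now = clock.millis();
        Counters c = counters.computeIfAbsent(clientId, k -> new Counters());

        if (now - c.minuteWindowStart >= 60_000) {      // new one-minute window
            c.minuteWindowStart = now;
            c.minuteCount = 0;
        }
        if (now - c.dayWindowStart >= 86_400_000) {     // new one-day window
            c.dayWindowStart = now;
            c.dayCount = 0;
        }

        if (c.minuteCount >= limits.requestsPerMinute() || c.dayCount >= limits.requestsPerDay()) {
            return false;                               // over the rate limit or the daily quota
        }
        c.minuteCount++;
        c.dayCount++;
        return true;
    }

    public static void main(String[] args) {
        var limiter = new ClientRateLimiter(
                Map.of("acme-corp", new Limits(60, 10_000)), Clock.systemUTC());
        System.out.println(limiter.allow("acme-corp")); // true until the limits are hit
    }
}
```

Fixed windows keep the example short; a gateway that needs smoother throttling would typically switch to a token-bucket or sliding-window variant of the same idea.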


Google Introduces Two New Datasets For Improved Conversational NLP

Conversational agents are dialogue systems that use NLP to respond to a given query in human language. They leverage advanced deep learning and natural language understanding to move beyond simple chatbot responses and make interactions more contextual. Conversational AI encompasses three main areas of artificial intelligence research — automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech (TTS, or speech synthesis). These dialogue systems read from the input channel and then reply with a relevant response in graphics, speech, or haptic-assisted physical gestures via the output channel. Modern conversational models often struggle when confronted with temporal relationships or disfluencies. The capability for temporal reasoning in dialogues of massive pre-trained language models like T5 and GPT-3 is still largely under-explored, and progress on improving their performance has been slow, in part because of the lack of datasets that capture these conversational and speech phenomena.


Addressing the cybersecurity skills gap through neurodiversity

Having a career in cybersecurity typically requires logic, discipline, curiosity and the ability to solve problems and find patterns. This is an industry that offers a wide spectrum of positions and career paths for people who are neurodivergent, particularly for roles in threat analysis, threat intelligence and threat hunting. Neurodiverse minds are usually great at finding the needle in the haystack, the small red flags and minute details that are critical for hunting down and analyzing potential threats. Other strengths include pattern recognition, thinking outside the box, attention to detail, a keen sense of focus, methodical thinking and integrity. The more diverse your teams are, the more productive, creative and successful they will be. And not only can neurodiverse talent help strengthen cybersecurity, employing different minds and perspectives can also solve communication problems and create a positive impact for both your team and your company. According to the Bureau of Labor Statistics, the demand for Information Security Analysts — one of the common career paths for cybersecurity professionals — is expected to grow 31% by 2029, much higher than the average growth rate of 4% for other occupations.


Realizing IoT’s potential with AI and machine learning

Propagating algorithms across an IIoT/IoT network down to the device level is essential for the entire network to achieve and maintain real-time synchronization. However, updating IIoT/IoT devices with new algorithms is problematic, especially for legacy devices and the networks supporting them. Overcoming this challenge is essential in any IIoT/IoT network because algorithms are core to edge AI succeeding as a strategy. Across manufacturing floors globally today, there are millions of programmable logic controllers (PLCs) in use, supporting control algorithms and ladder logic. Statistical process control (SPC) logic embedded in IIoT devices provides the real-time process and product data that is integral to successful quality management. IIoT is actively being adopted for machine maintenance and monitoring, given how accurate sensors are at detecting sounds and any variation in the process performance of a given machine. Ultimately, the goal is to better predict machine downtime and prolong the life of an asset.
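
As a rough illustration of the SPC logic described above, the sketch below learns a baseline from historical sensor readings and flags anything outside the common three-sigma control limits. The class name and the vibration readings are made up for the example; real deployments use proper control charts (e.g. X-bar/R) and domain-specific limits.

```java
import java.util.List;

// Simplified illustration of SPC logic embedded in an IIoT device: learn a baseline
// for a sensor, then flag readings that drift outside the expected range.
public class SpcMonitor {

    private final double mean;
    private final double upperLimit;
    private final double lowerLimit;

    public SpcMonitor(List<Double> baselineReadings) {
        mean = baselineReadings.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double variance = baselineReadings.stream()
                .mapToDouble(x -> (x - mean) * (x - mean))
                .average().orElse(0.0);
        double sigma = Math.sqrt(variance);
        upperLimit = mean + 3 * sigma;   // upper control limit
        lowerLimit = mean - 3 * sigma;   // lower control limit
    }

    /** True if the reading falls outside the control limits and should raise a maintenance alert. */
    public boolean isOutOfControl(double reading) {
        return reading > upperLimit || reading < lowerLimit;
    }

    public static void main(String[] args) {
        // Hypothetical vibration readings (mm/s) from a healthy machine.
        SpcMonitor monitor = new SpcMonitor(List.of(2.1, 2.0, 2.2, 1.9, 2.1, 2.0));
        System.out.println(monitor.isOutOfControl(2.05)); // false: within normal variation
        System.out.println(monitor.isOutOfControl(4.8));  // true: likely a developing fault
    }
}
```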


Understanding and applying robotic process automation

RPA can allow businesses to reallocate their employees, removing them from repetitive tasks and engaging them in projects that support true growth, both for the company and the individual. Work where human strengths such as emotional intelligence, reasoning and judgment are required typically brings greater value to the company, and it is also often more personally rewarding. This can raise job satisfaction and help retain employees. Further, the ability to reallocate employees can enable a business to apply their useful company knowledge to other value-adding areas, supplement talent gaps and more. Of course, there's also the attraction of being able to do one's job more efficiently, without manual processes that make time drag. For instance, let's say you're at that same investment firm and there is a rapidly growing hedge fund, requiring human resources (HR) to onboard a lot of people fast. Between provisioning accounts, providing access to the right tools, sending out emails and more, there's a lot of work involved. With an RPA bot, 20 new people could be processed at once, with the HR person monitoring progress through a window in the corner of their screen, which also notifies them if anything needs their attention.
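
A rough sketch of that onboarding scenario follows, assuming three hypothetical steps (provisioning an account, granting tool access, sending a welcome email). Real RPA bots are built in dedicated platforms such as UiPath or Blue Prism and drive the actual HR and IT systems; this only shows the shape of the workflow: many hires processed concurrently, progress reported, and exceptions surfaced for human attention.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

// Illustrative only: the step names below are invented stand-ins for calls into
// real identity, licensing, and mail systems.
public class OnboardingBot {

    static void provisionAccount(String hire)  { /* call identity system */ }
    static void grantToolAccess(String hire)   { /* call licensing / access APIs */ }
    static void sendWelcomeEmail(String hire)  { /* call mail system */ }

    public static void main(String[] args) throws InterruptedException {
        List<String> newHires = IntStream.rangeClosed(1, 20)
                .mapToObj(i -> "hire-" + i)
                .toList();

        AtomicInteger completed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(5);

        for (String hire : newHires) {
            pool.submit(() -> {
                try {
                    provisionAccount(hire);
                    grantToolAccess(hire);
                    sendWelcomeEmail(hire);
                    // Progress the HR person can watch from a window in the corner of their screen.
                    System.out.printf("onboarded %s (%d/%d)%n",
                            hire, completed.incrementAndGet(), newHires.size());
                } catch (Exception e) {
                    // Exceptions are surfaced for human attention instead of failing silently.
                    System.err.println("needs attention: " + hire + " - " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```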


It's time for AI to explain itself

Ultimately, organizations may not have much choice but to adopt XAI. Regulators have taken notice. The European Union's General Data Protection Regulation (GDPR) demands that decisions based on AI be explainable. Last year, the U.S. Federal Trade Commission issued stringent guidelines around how such technology should be used. Companies found to have bias embedded in their decision-making algorithms risk violating multiple federal statutes, including the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and antitrust laws. "It is critical for businesses to ensure that the AI algorithms they rely on are explainable to regulators, particularly in the antitrust and consumer protection space," says Dee Bansal, a partner at Cooley LLP, which specializes in antitrust litigation. "If a company can't explain how its algorithms work [and] the contours of the data on which they rely … it risks being unable to adequately defend against claims regulators may assert that [its] algorithms are unfair, deceptive, or harm competition." It's also just a good idea, notes James Hodson, CEO of the nonprofit organization AI for Good.


AI ethics in the real world: FTC commissioner shows a path toward economic justice

The value of a machine learning algorithm is inherently related to the quality of the data used to develop it, and faulty inputs can produce thoroughly problematic outcomes. This broad concept is captured in the familiar phrase: "Garbage in, garbage out." The data used to develop a machine-learning algorithm might be skewed because individual data points reflect problematic human biases or because the overall dataset is not adequately representative. Often, skewed training data reflect historical and enduring patterns of prejudice or inequality, and when they do, these faulty inputs can create biased algorithms that exacerbate injustice, Slaughter notes. She cites some high-profile examples of faulty inputs, such as Amazon's failed attempt to develop a hiring algorithm driven by machine learning, and the International Baccalaureate's and UK's A-Level exams. In all of those cases, the algorithms introduced to automate decisions identified patterns of bias in the data used to train them and attempted to reproduce them. ...


How to navigate technology costs and complexity with enterprise architecture

The modern business world is increasingly driven by technology. As we move to a more interconnected and complex environment, the demand for suitable technologies keeps increasing – so much so that the average enterprise pays for approximately 1,516 applications. With the shift to remote working, we’re also seeing an overwhelming imperative to migrate to the cloud, and today application costs are estimated to make up 80 per cent of the entire IT budget. Industry analyst Gartner has even forecast that worldwide IT spending will reach $4 trillion in 2021. The modern chief information officer (CIO) is responsible for understanding these technology costs and bringing them under control – and a key enabler of this is enterprise architecture (EA). By providing a strategic view of change, EA ensures alignment between business and IT operations, facilitating agility, speed and the ability to make real-time decisions based on reliable and consistent data. So, what are the common challenges of spiralling technology costs, and how can EA help reduce this pressure for CIOs?



Quote for the day:

“Patience is the calm acceptance that things can happen in a different order than the one you have in mind.” -- David G. Allen