Daily Tech Digest - September 22, 2020

How industrial AI will power the refining industry in the future

The ultimate vision for the industry is the self-optimising, autonomous plant – and the increasing deployment of artificial intelligence (AI) across the sector is bringing the reality of this ever closer. However, while refining has been an early adopter of many digital tools, the industry is yet to fully realise the potential of industrial AI. That is, in no small part, because AI and machine learning are too often looked at in isolation, rather than being combined with existing engineering capabilities – models, tools and expertise – to deliver a practical solution that effectively optimises refinery assets. ... Machine learning is used to create the model, leveraging simulation, plant or pilot plant data. The model also uses domain knowledge, including first principles and engineering constraints, to build an enriched model — without requiring the user to have deep process expertise or be an AI expert. The solutions supported by hybrid models act as a bridge between the first-principles-focused world of the past and the “smart refinery” environment of the future. They are the essential catalyst helping to enable the self-optimising plant.
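The hybrid-model idea above can be made concrete with a minimal sketch: a physics-based estimate corrected by a data-driven residual. Everything below is hypothetical and invented for illustration (the relation, the synthetic plant data and the polynomial stand-in for the machine-learned correction); it is not any vendor's actual tooling.

```python
import numpy as np

# Hypothetical first-principles model: conversion as a function of reactor
# temperature (a simplified Arrhenius-style relation, for illustration only).
def first_principles(temp_k):
    return 1.0 - np.exp(-np.exp(12.0 - 8000.0 / temp_k))

# Synthetic "plant data": the real unit deviates from the idealised model.
rng = np.random.default_rng(0)
temp = rng.uniform(550.0, 650.0, size=200)
measured = first_principles(temp) * 0.92 + 0.03 + rng.normal(0.0, 0.01, size=200)

# Data-driven correction: fit the residual between plant data and physics.
# A quadratic fit stands in for whatever ML model the toolchain would use.
residual = measured - first_principles(temp)
coeffs = np.polyfit(temp, residual, deg=2)

def hybrid_model(temp_k):
    """First-principles prediction plus the learned correction."""
    return first_principles(temp_k) + np.polyval(coeffs, temp_k)

print(hybrid_model(np.array([600.0])))
```

The enrichment with domain knowledge shows up in the structure: the physics term carries the engineering constraints, and the learned term only has to explain the plant-specific deviation.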


Microsoft's new feature uses AI to make video chat less weird

Eye Contact uses the custom artificial intelligence (AI) engine in the Surface Pro X's SQ1 SOC, so you shouldn't see any performance degradation, as much of the complex real-time computational photography is handed off to it and to the integrated GPU. Everything is handled at a device driver level, so it works with any app that uses the front-facing camera -- it doesn't matter if you're using Teams or Skype or Slack or Zoom, they all get the benefit. There's only one constraint: the Surface Pro X must be in landscape mode, as the machine learning model used in Eye Contact won't work if you hold the tablet vertically. In practice that shouldn't be much of an issue, as most video-conferencing apps assume that you're using a standard desktop monitor rather than a tablet PC, and so are optimised for landscape layouts. The question for the future is whether this machine-learning approach can be brought to other devices. Sadly it's unlikely to be a general-purpose solution for some time; it needs to be built into the camera drivers and Microsoft here has the advantage of owning both the camera software and the processor architecture in the Surface Pro X.


Digital transformation: 5 ways the pandemic forced change

Zemmel says that the evolution of the role of the CIO has been accelerated as well. He sees CIOs increasingly reporting to the CEO because they now have a dual mandate. In addition to their historical operational role running the IT department, they are also customer-facing and driving revenue. That mandate is not new for forward-looking IT organizations, but the pandemic has made other organizations hyper-aware of IT’s role in driving change quickly. CIOs are becoming a sort of “chief influencing officer who is breaking down silos and driving adoption of digital products,” Zemmel adds. Experian’s Libenson puts it this way: “The pandemic has forced us to be closer to the business than before. We had a seat at the table before. But I think we will be a better organization after this.” The various panelists gave nods to the role of technology, especially the use of data; Zemmel describes the second generation of B2B digital selling as “capturing the ‘digital exhaust’ to drive new analytic insights and using data to drive performance and create more immersive experiences.”


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

Graphics APIs have come a long way from a small set of basic commands allowing limited control of configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered industry standard. ... This article describes Diligent Engine, a light-weight cross-platform graphics API abstraction layer that is designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. The full source code is available for download on GitHub and is free to use.


Supporting mobile workers everywhere

It is amazing how quickly video conferencing has been accepted as part of the daily routine. Such is the success of services like Zoom that CIOs need to reassess priorities. In a workforce where people are working from home regularly, remote access is not limited to a few, but must be available to all. Mobile access and connectivity for the mobile workforce need to extend to employees’ homes. Traditional VPN access has scalability limitations and is inefficient when used to provide access to modern SaaS-based enterprise applications. To reach all home workers, some organisations are replacing their VPNs with SD-WANs. There is also an opportunity to revisit bring-your-own-device (BYOD) policies. If people have access to computing at home and their devices can be secured, then CIOs should question the need to push out corporate laptops to home workers. While IT departments may have traditionally deployed virtual desktop infrastructure (VDI) to stream business applications to thin client devices, desktop as a service (DaaS) is a natural choice for delivering a managed desktop environment to home workers. For those organisations that are reluctant to use DaaS in the public cloud, as Oxford University Social Sciences Division (OSSD) has found (see below), desktop software can easily be delivered in a secure and manageable way using containers.


Secure data sharing in a world concerned with privacy

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards. One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
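The additive property described here is easy to see with a toy Paillier-style construction. The sketch below uses deliberately tiny parameters purely for intuition; real systems rely on vetted cryptographic libraries, hardened implementations and far larger keys.

```python
import math
import random

# Toy Paillier-style additively homomorphic encryption (illustration only).
p, q = 293, 433                      # tiny primes; real keys are thousands of bits
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                 # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# Multiplying ciphertexts adds the plaintexts, so a third party can compute an
# aggregate it is never able to read.
a, b = 12345, 54321
total = decrypt((encrypt(a) * encrypt(b)) % n_sq)
print(total == (a + b) % n)          # True
```

Fully homomorphic schemes go further and support arbitrary computation on ciphertexts, which is what makes HE attractive for the collaborative analytics described above.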



When -- and when not -- to use cloud native security tools

Cloud native security tools like Amazon Inspector and Microsoft Azure Security Center automatically inspect the configuration of common types of cloud workloads and generate alerts when potential security problems are detected. Google Cloud Data Loss Prevention and Amazon Macie provide similar functionality for data by automatically detecting sensitive information that is not properly secured and alerting the user. To protect data even further there are tools, such as Amazon GuardDuty and Azure Advanced Threat Protection, that monitor for events that could signal security issues within cloud-based and on-premises environments. ... IT teams use services like Google Cloud Armor, AWS Web Application Firewall and Azure Firewall to configure firewalls that control network access to applications running in the cloud. Related tools provide mitigation against DDoS attacks that target cloud-based resources. ... Data stored on the major public clouds can be encrypted electively -- or is encrypted automatically by default -- using native functionality built into storage services like Amazon S3 and Azure Blob Storage. Public cloud vendors also offer cloud-based key management services, like Azure Key Vault and Google Key Management Service, for securely keeping track of encryption keys.
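As a small illustration of the kind of configuration check these services automate, the sketch below (written against boto3; the account setup and bucket contents are assumptions) flags S3 buckets with no default server-side encryption rule. Purpose-built tools such as Amazon Macie or Azure Security Center of course go far beyond this.

```python
import boto3
from botocore.exceptions import ClientError

# Flag S3 buckets that have no default server-side encryption configured.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption configured")
    except ClientError as err:
        # Buckets without a default-encryption rule (or that we cannot read)
        # raise a ClientError here; surface that as something to alert on.
        code = err.response["Error"]["Code"]
        print(f"ALERT {name}: no default encryption found ({code})")
```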


Four Case Studies for Implementing Real-Time APIs

Unreliable or slow performance can directly impact or even prevent the adoption of new digital services, making it difficult for a business to maximize the potential of new products and expand its offerings. Thus, it is not only crucial that an API processes calls at acceptable speeds, but it is equally important to have an API infrastructure in place that is able to route traffic to resources correctly, authenticate users, secure APIs, prioritize calls, provide proper bandwidth, and cache API responses. Most traditional APIM solutions were made to handle traffic between servers in the data center and the client applications accessing those APIs externally (north-south traffic). They also need constant connectivity between the control plane and data plane, which requires using third-party modules, scripts, and local databases. Processing a single request creates significant overhead — and it only gets more complex when dealing with the east-west traffic associated with a distributed application. Considering that a single transaction or request could require multiple internal API calls, the bank in one of the case studies found it extremely difficult to deliver good user experiences to its customers.


Building the foundations of effective data protection compliance

Data protection by design and default needs to be planned within the whole system, depending on the type of data and how much data a business has. Data classification is the categorization of data according to its level of sensitivity or value, using labels. These are attached as visual markings and metadata within the file. When classification is applied, the metadata ensures that the data can only be accessed or used in accordance with the rules that correspond with its label. Businesses need to mitigate attacks and employee mistakes by starting with policy: assessing who has access. Then they should select a tool that fits the policy, not the other way round; you should never be faced with selecting a tool and then having to rewrite your policy to fit it. The right tool will then support users with automation and labelling, which enhances the downstream technology. Once data is appropriately classified, security tools such as Data Loss Prevention (DLP), policy-based email encryption, access control and data governance tools are exponentially more effective, as they can access the information provided by the classification label and metadata that tells them how data should be managed and protected.
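A minimal sketch of that label-then-enforce pattern follows. The labels, rules and documents are invented for illustration; commercial tools attach the label as document metadata and visual markings, and the downstream DLP engine reads it to decide what is allowed.

```python
# Hypothetical classification labels and the handling rules attached to them.
CLASSIFICATION_RULES = {
    "public":       {"external_email": True,  "must_encrypt": False},
    "internal":     {"external_email": False, "must_encrypt": False},
    "confidential": {"external_email": False, "must_encrypt": True},
}

documents = [
    {"name": "press-release.docx", "label": "public"},
    {"name": "org-chart.pptx",     "label": "internal"},
    {"name": "payroll-2020.xlsx",  "label": "confidential"},
]

def outbound_email_decision(doc):
    """A tiny DLP-style check driven entirely by the classification label."""
    rule = CLASSIFICATION_RULES[doc["label"]]
    if rule["external_email"]:
        return "allow"
    if rule["must_encrypt"]:
        return "require encryption and approval"
    return "block and alert"

for doc in documents:
    print(f'{doc["name"]} ({doc["label"]}): {outbound_email_decision(doc)}')
```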


Q&A on the Book Fail to Learn

People often fear failure because of the stakes associated with it. When we create steep punishment systems and “one-strike-you’re-out” rules, it’s only natural to be terrified of messing up. This is where we need to think more like game designers. Games encourage trial and error because the cost of starting over in a game is practically nothing. If I die playing Halo, I get to respawn and try again immediately. We need to create more “respawn” options in the rest of our lives. This is something that educators can do in their course design. But it’s also something we can encourage as managers, company leaders, or simply as members of society. The best way to do this is to start talking more about our mistakes. These are things we should be able to celebrate, laugh over, shake our collective heads at, and eventually grow from. ... If we go back to people like Dyson and Edison, you see failure-to-success ratios that reach five-thousand or even ten-thousand to one. A venture capitalist who interviewed hundreds of CEOs arrived at the same ratio for start-up companies making it big: about a 10,000:1 failure-to-success ratio. Now, we probably don’t need that many failures in every segment of our lives, but think about how far off most of us are from these numbers.



Quote for the day:

"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani

Daily Tech Digest - September 21, 2020

Ransomware attacks growing in number, severity: Why experts believe it will get worse

The attackers are also changing their tactics, making it difficult to defend against them. “Those tactics include things like not just encrypting victims’ computer networks but also exfiltrating data on those victim networks,” Holmes said. “Now when victims realize they’ve become a victim of a ransomware attack, it’s not just that their data is encrypted. It’s also that they’ve lost all their — or much of their — information. The ransomware actors are holding that data at risk and telling victims if they don’t pay the ransom, they’re going to leak that data to the public.” Perhaps not surprisingly, ransomware attackers are starting to band together, Holmes said. “We’ve seen some of these ransomware actors entering into a cartel with one another. Under this cartel model, they’ll share information amongst the group members and share intelligence and share techniques. It’s a bit concerning to us as well [because] it shows a major shift change among the ransomware actors.” The ransomware cartels are forming into business-like organizations with specialization of labor, Jason Conboy of DHS’s Investigations division said. “You have your malware office… and they have effectively built skills to write that malware. You’ve got your money mules.


Cloud banking: More than just a CIO conversation

The bank of 2030 will look very different from today. Facing changing consumer expectations, emerging technologies, and alternative business models, banks need to start putting strategies in place now to help them prepare for this future. An important indicator of the shifting landscape? Cloud computing is moving to the forefront as a focus for the chief information officer, C-suite executives, and board members. Banking and capital markets leaders increasingly recognize that cloud is more than a technology; it is a destination for banks and other financial services firms to store data and applications and access advanced software applications via the internet. The leading public cloud providers offer an array of innovative products-as-a-service that can be accessed on their platforms and help banks implement business and operating models to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets. The cloud also offers a huge opportunity to synchronize the enterprise; to break down operational and data silos across risk, finance, regulatory, customer support, and more. Once massive data sets are combined in one place, the organization can apply advanced analytics for integrated insights.


Machine Learning Biases Might Define Minority Health Outcomes

Algorithmic biases present marginalized patients as more healthy, disqualifying them from the specialty care they need. ... AI bias issues are in no way new to the computer science community; however, it is unclear whether the medical community even recognizes the problem. Algorithm-driven patient care results in less adequate care for people striving to push back against the tide of systemic racism, given that the lack of diversity in big data and automated-decision regulations has not been ameliorated by the United States federal government. In contrast, European data regulations state that machine-automation developers must use appropriate mathematical and statistical techniques to ensure both that risk of error is minimized and that discriminatory effects are prevented. Regardless of geography, it is undeniable that managing these prejudices requires careful attention to data, the use of artificial intelligence to help detect bias, and the building of diverse teams. The federal government must ensure that those behind the automation are ethically and legally obliged to ensure AI is on the side of fairness.


Ireland’s data watchdog slammed for letting adtech carry on ‘biggest breach of all time’

Responding to the ICCL submission, the DPC’s deputy commissioner Graham Doyle sent this statement: “Extensive recent updates and correspondence on this matter, including a meeting, have been provided by the DPC. The investigation has progressed and a full update on the next steps provided to the concerned party.” However, in a follow-up to Doyle’s remarks, Ryan told TechCrunch he has “no idea” what the DPC is referring to when it mentions a “full update”. On “next steps” he said the regulator informed him it will produce a document setting out what it believes the issues are — within four weeks of its letter, dated September 15. Ryan expressed particular concern that the DPC’s enquiry does not appear to cover security — which is the crux of the RTB complaints, since GDPR’s security principle puts an obligation on processors to ensure data is handled securely and protected against unauthorized processing or loss. ... The regulator also pointed to another related open enquiry — into adtech veteran Quantcast, also opened in May 2019. The DPC has said the Quantcast enquiry is examining the lawful basis claimed for processing Internet users’ data for ad targeting purposes, as well as considering whether transparency and data retention obligations are being fulfilled.


Write Better Automated Tests With Cypress in Angular

Cypress is much like Protractor for Angular applications, but Cypress is much faster to run and easier to debug. Cypress is not only good at automated end-to-end testing that is independent of our applications, but is also capable of unit testing methods in model classes, service classes, and so on. Cypress provides a complete end-to-end testing experience in your browser, where you can watch your tests run in an automated way. Cypress provides its own test runner where you can run the tests locally. Cypress also provides some other cool features like Time Travel, Debuggability, Real-time reloads and Automatic waiting. These brilliant features set this tool apart, and we will see them in action. Enough talk. If you've already cloned this repo then you're good to go; if not, you probably have an Angular project and just have to add Cypress as a dependency. ... The cypress package includes a desktop app and the Cypress binary. It may take a few minutes to run this command for the first time because the package installation needs to download the Cypress binary. The Cypress binary is saved in a global cache directory, so installing this package in the future will be much faster for the same version.


Connecting the dots in an uncertain world

A tendency to underestimate the unexpected — both the chance of it happening and the potential in it — is a common one. To combat this bias, instead of doing “risk management,” which tends to focus on known unknowns, you can build a muscle for planning for and finding potential in the unexpected. This will be what counts during periods of crisis and uncertainty. One way of doing this is to develop a North Star you can use to connect unexpected events to your mission. Take Kaan Terzioglu, CEO of Turkish telecom company Turkcell from 2015 to 2019. When there was an outbreak of serious violence in Turkey in 2016 following an attempted coup d’etat, Kaan and his team responded by making the Internet free for a month so people were able to communicate with those they cared about without worrying about bills. And during a regional refugee crisis, the company’s quick and proactive actions to develop an app to help Syrian refugees find crucial health and financial information in the country calmed tensions in the camps. As Turkcell has built on this initial app, adding more services, customer loyalty among the refugee population has given Turkcell a major market share among the 3 million Syrian refugees in Turkey. 


What advances in HPC and AI mean for business, and what they do not

What businesses do need to be mindful of when looking to deploy AI is that it is not a magic bullet to answering business-critical questions. What’s more, as we’ve seen in several recent unfortunate examples, if your AI is not trained with a wide set of data, it can end up amplifying a wrong supposition, rendering the end product useless. For example, if you’re only training your facial recognition programme with pictures of white men, you’re going to get some biased and potentially discriminatory outcomes. As with all forms of modelling, the findings you get out are only as good as the data that you put into it. There are no shortcuts around this simple fact and any organisation trying to do so will quickly become unstuck. The key take-away from these missteps is not that AI has failed or is incapable of carrying out a task; it’s that humans have, can and often do apply the technology poorly. Today, AI is transforming almost every industry and vertical, from pathology, where it’s used to detect cancer, to AI picking up the phone to answer contact centre customer queries, from water-controlled indoor farming to driving autonomous vehicles in warehouses and on public roads.


Interview with Event Modeling Founder - Adam Dymitruk

The event model contains independent units of work (slices). This removes the need for stand-ups, sprints and many other processes that are not effective with remote work. These saved hours are reinvested in implementing the solution. The transition to remote work is not even felt with the guidance event modeling brings. Because the patterns are prescribed, event modeling takes the guesswork out of many aspects of writing software. The design allows a fixed and flat cost curve where the complexity of the system doesn't make a project end in a "death march". Event Modeling is simple. Whether you're looking at BPMN or UML, the traditional practices for design are incredibly heavy. The main benefit is that you can get the same value for a much smaller investment of time to create a blueprint for how your current or potential solution works or will work. The goal is to sweep aside the mountain of books and get an organization going quickly. It takes 15 minutes to explain Event Modeling. The other main benefit is having a reliable plan to start implementation. Most new projects in software today head into development blind. An important test that the event model provides is that it shows if you are "information complete".


The 4 Top Artificial Intelligence Trends For 2021

From advancement in medical technology and standards of care, to advances in communication technology that enabled outbreaks to be spotted more quickly and lockdowns imposed. Over the next year, AI will be added to the list of technological developments that are enabling us to more effectively deal with pandemics. The growth in the amount of scientific and medical literature alone is enormous, with more than 28,000 papers published by April this year relating to Covid-19. A dedicated search engine powered by natural language processing (NLP) algorithms has already been made available, so anyone can get AI assistance when investigating this massive dataset. Work is also ongoing to develop AI solutions to help deal with the huge backlog of other medical issues, such as cancer, where treatment has been affected as resources are diverted to fight Covid-19. Over the next year, we are likely to see the accelerated adoption of AI across many other areas of healthcare, not only related to tackling viruses. By developing our ability to apply machine learning problem-solving to these massive, real-time global datasets, we will spot outbreaks more easily, track contact between infected people, enable more accurate diagnoses, and, by predicting ways that a virus might evolve in the future, develop more effective and lasting vaccinations.
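To give a feel for what an NLP-assisted literature search does at its simplest, here is a toy retrieval sketch over made-up abstracts using TF-IDF similarity; the real Covid-19 search tools built on that 28,000-plus paper corpus use far richer language models than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Stand-in "abstracts" -- a real system would index tens of thousands of papers.
abstracts = [
    "Transmission dynamics of the novel coronavirus in crowded indoor settings.",
    "Deep learning for chest CT triage of suspected Covid-19 pneumonia.",
    "Impact of delayed cancer screening during the pandemic on patient outcomes.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)

def search(query, top_k=2):
    """Rank abstracts by cosine similarity to the query."""
    query_vec = vectorizer.transform([query])
    scores = linear_kernel(query_vec, doc_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(round(float(scores[i]), 3), abstracts[i]) for i in ranked]

for score, abstract in search("covid imaging diagnosis"):
    print(score, abstract)
```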


New US digital identity legislation promises more secure verification

The Improving Digital Identity Act would create an Improving Digital Identity Task Force within the executive office of the president. Its mission is to establish a government-wide effort to develop secure methods for federal, state and local government agencies to validate identity attributes and support interoperable digital identity verification in both the public and private sectors. The task force would be composed of cabinet secretaries, heads of other federal agencies, state and local government officials, congressional committee designated members, and a position appointed by the president. Additionally, the National Institute of Standards and Technology (NIST) would develop a standards framework for digital identity verification to guide federal, state and local governments in selecting their digital identity solutions. NIST would have one year to publish a final version of the framework. The legislation requires the task force to publish a report with recommendations on research and development in systems that enable digital identity verification. Upon its completion and with the consent of the individual, the framework will enable government agencies to securely vouch for their citizens in real time online.



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - September 20, 2020

Why disaster recovery preparation is even more important during a pandemic

From a cyber perspective, disaster recovery during a pandemic raises new challenges as well. The rapid expansion of remote work introduces new vulnerabilities. Many organizations have relaxed perimeter security controls to allow remote connectivity, introducing new threat vectors that threat actors can exploit to gain access to networks. Lately, many of these attacks have focused on ransomware and data destruction, which encrypt data and often corrupt critical backup systems, rendering existing disaster recovery plans unusable. An "all hands on deck" approach to manual recovery is often the only response to these conditions. Unfortunately, social distancing protocols and remote work arrangements can make those manual recovery efforts an impossibility. ... IT disaster recovery generally falls into one of two categories: A natural disaster event (earthquake, flood, etc.) or a system failure (such as failures in hardware, software or electrical). This year, actual DR responses we have witnessed have included issues with local or regional power outages, or power infrastructure issues. We have seen this across multiple industries including financial services with outages during peak customer windows and prolonged recovery times.


Iranian hacker group developed Android malware to steal 2FA SMS codes

In a report published today, Check Point researchers said they also discovered a potent Android backdoor developed by the group. The backdoor could steal the victim's contacts list and SMS messages, silently record the victim via the microphone, and show phishing pages. But the backdoor also contained routines that were specifically focused on stealing 2FA codes. Check Point said the malware would intercept and forward to the attackers any SMS message that contained the "G-" string, usually employed to prefix 2FA codes for Google accounts sent to users via SMS. The thinking is that Rampant Kitten operators would use the Android trojan to show a Google phishing page, capture the user's account credentials, and then access the victim's account. If the victim had 2FA enabled, the malware's 2FA SMS-intercepting functionality would silently send copies of the 2FA SMS code to the attackers, allowing them to bypass 2FA. But that was not all. Check Point also found evidence that the malware would also automatically forward all incoming SMS messages from Telegram and other social network apps. These types of messages also contain 2FA codes, and it's very likely that the group was using this functionality to bypass 2FA on more than Google accounts.


Clean Coders: Why India isn’t on the List

A more vexing element drives the problem — a majority of Indian software companies look at software purely as a business. It’s mostly about getting the deliverables ready in the quoted time and almost never about striving for quality results. Consequently, the team treats coding as a task to be ticked off with numbers rather than a task requiring quality — something that would actually educate folks to avoid future mistakes. It’s a chain reaction, really. When the organization itself does not prioritize clean quality coding when a product is being developed, most coders lose the urge to be curious about better practices and approaches since they have to direct all their efforts into meeting deadlines. Even to this day, many skilled professionals in the industry lack the ability to convey their ideas and pain points effectively during client meetings or within the team. Organizations need to establish the fact that coding is only one aspect of the job and that communication is equally important. Especially in the service sector, when we are constantly collaborating on large-scale projects, it’s absolutely crucial for clients and internal teams to be on the same page.


6 big data blunders businesses should avoid

Owing to modern technologies, all trades, irrespective of size, have access to granular and rich data based on their operations and clients. The major hurdle in this is dealing with a massive quantity of data that is both challenging to maintain and costly to manage. Despite the presence of appropriate tools, dealing with such data is a cumbersome activity. Errors are frequent given the layers of complexity involved in dealing with Big Data. However, Big Data holds diverse advantages for businesses. ... Thus, Big Data becomes the defining advantage for innovative enterprises seeking an edge over their competitors. The global big data analytics market is expected to exceed $274.3 billion by 2022, with each individual generating approximately 1.7 megabytes of information per second. With such advantages at stake, can you really afford to make blunders with Big Data? So, here are some big data blunders that businesses need to avoid to harness its full capabilities and enjoy the benefits that it brings.


Tracking global cybercrime activity and the impact on the digital economy

The EMEA region saw lower overall attack rates in comparison to most other global regions from January through June 2020. This is due to a high volume of trusted login transactions across relatively mature mobile apps. The attack patterns in EMEA were also more benign and had less volatility and fewer spikes in attack rates. However, there are some notable exceptions. Desktop transactions conducted from EMEA had a higher attack rate than the global average and automated bot attack volume grew 45% year over year. The UK originates the highest volume of human-initiated cyberattacks in EMEA, with Germany and France second and third in the region. The UK is also the second largest contributor to global bot attacks behind the U.S. One example of a UK banking fraud network saw more than $17 million exposed to fraud across 10 financial services organizations. This network alone consisted of 7,800 devices, 5,200 email addresses and 1,000 telephone numbers. The overall human-initiated attack rate fell through the first half of 2020, showing a 33% decline year over year. The breakdown by sector shows a 23% decline in financial services and a 55% decline in e-commerce attack rates.


Load Testing APIs and Websites with Gatling: It’s Never Too Late to Get Started

If it is your first time load testing, whether you already know the target user behavior or not, you should start with a capacity test. Stress testing is useful, but analyzing the metrics under such a load is really tricky: since everything is failing at the same time, the task becomes difficult, even impossible. Capacity testing offers the luxury of going slowly to failure, which is more comfortable for the first analysis. To get started, just run a capacity test that makes your application crash as soon as possible. You only need to add complexity to the scenario when everything seems to run smoothly. ... If an average can give you a quick overview of what happened in a run, it will hide under the rug all the things you actually want to look at. This is where percentiles come in handy. Think of it this way: if the average response time is some amount of milliseconds, how does the experience feel in the worst case for 1% of your user base? Better or worse? How does it feel for 0.1% of your users? And so on, getting closer and closer to zero. The higher the amount of users and requests, the closer you’ll need to get to zero in order to study extreme behaviors.
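A quick sketch makes the percentile point concrete. The latency numbers below are synthetic, with a small very-slow tail; the mean looks acceptable while p99 and p99.9 expose what that 1% and 0.1% of users actually experience.

```python
import random

# Synthetic response times: most requests are fast, a small tail is very slow.
random.seed(1)
latencies_ms = [random.gauss(120, 15) for _ in range(9900)] + \
               [random.gauss(2500, 400) for _ in range(100)]

def percentile(samples, pct):
    """Nearest-rank percentile: the value below which roughly pct% of samples fall."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(round(pct / 100 * len(ordered))))
    return ordered[index]

mean = sum(latencies_ms) / len(latencies_ms)
print(f"mean  : {mean:7.1f} ms")                                  # looks fine...
for pct in (50, 95, 99, 99.9):
    print(f"p{pct:<5}: {percentile(latencies_ms, pct):7.1f} ms")  # ...until the tail shows up
```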


Who Should Own Data Governance?

Many organizations position data governance under the Chief Financial Officer (CFO). Other organizations position data governance under the Chief Risk Officer (CRO) or the Chief Operational Officer (COO). In addition, some organizations position data governance under the Chief Privacy Officer (CPO) or the Chief Information Security Officer (CISO). These days there are so many C-levels. Placement of data governance under any one of these C-level people is never wrong. Data governance must reside somewhere and having a C-level person as your Executive Sponsor is always a good thing. In fact, many organizations state that senior leadership’s support, sponsorship and understanding of data governance is the number one best practice for starting and sustaining their program. Having a C-level person as your Executive Sponsor often dictates where data governance will reside in the organization. Is it better for data governance to be placed in Finance and report through the CFO than it is to have it reside in Operations and report through the COO? The answer to that question is, “It depends.” It depends on the interest and ability of that person and that part of the organization to provide for the proper level of capacity in terms of resources to operationalize and engage the organization.


Why Are Some Cybersecurity Professionals Not Finding Jobs?

Simply stated, these good people cannot get hired in a cyber job. Going much further, they argue that select organizations (who discuss millions of unfilled jobs) are pushing their own training agendas and certifications, want to boost certain company stock prices or have other reasons to encourage this “abundance of cyber jobs remain vacant” narrative, even though it is not true, in their opinion. I want to be clear up front that I disagree with this narrative. I do believe that many (perhaps millions but we can argue the numbers in another blog) global cybersecurity job vacancies do exist. Nevertheless, I truly sympathize with these people who disagree, and I want to try and help as many as I can find employment. I also want hiring managers to set proper expectations.  In addition to my blogs and articles, I have personally mentored and helped dozens of people find cyber jobs, from high school students to new college graduates to CISOs and CSOs. (Note: this is not my "day job" but one way I try to give back to the security community – just like so many others are doing as well.) I also champion ways that government CISOs struggle in this area, and how tech leaders can find more cyber talent.


Seven Steps to Realizing the Value of Modern Apps

With organizations running a multitude of environments to meet the demands of their applications, each with unique technological requirements, finding the platform isn’t the only challenge. What’s hard is that development and management are more complex than ever before, with IT and developers navigating traditional apps, cloud-native, SaaS, services and on-prem, for example. Here’s where you need a common ground between IT teams, Lines of Business and developers – where having a single digital platform is critical – to remove the potential for silos springing up, enable the better deployment of resources, and provide a consistent approach to managing applications, infrastructure and business needs together. It’s about creating one, common platform to ‘run all the things’. One, software-defined digital foundation that provides the platform – and choice of where to run IT – to drive business value, create the best environment for developers and help IT effectively manage existing and new technology via any cloud for any application on any device with intrinsic security. One platform that can deliver all apps, enabling developers to use the latest development methodologies and container technologies for faster time to production.


4 measures to counteract risk in financial services

Financial services regulators across jurisdictions have identified concentration risk as a factor to consider in assessing risk in outsourcing. That risk has two components: (i) micro-risk, where reliance on a single provider for core operations may present an undue operational risk if there is a single point of failure, and (ii) macro-risk, where financial firms within the ecosystem are so reliant on a vendor that a single point of failure risks causing a broad systemic risk to the operations of the financial services sector. Notably, this risk is not unique to cloud services and, as the Bank of England commented in its Consultation Paper on Outsourcing and Third Party Risk Management, “a small number of third parties have traditionally dominated the provision of certain functions, products, or services to firms, such as cash machines or IT mainframes.” In other words, the issue of concentration risk is not net new but has been a feature within the financial services industry for decades. While cloud remains relatively nascent compared to entrenched providers of legacy systems, like the mainframe, its increasing adoption means that financial institutions must account for, and mitigate against, micro-risk issues of concentration in use of cloud providers.



Quote for the day:

“When we are no longer able to change a situation, we are challenged to change ourselves.” -- Viktor E. Frankl

Daily Tech Digest - September 19, 2020

Why we need XAI, not just responsible AI

There are many techniques organisations can use to develop XAI. As well as continually teaching their system new things, they need to ensure that it is learning correct information and does not use one mistake or piece of biased information as the basis for all future analysis. Multilingual semantic searches are vital, particularly for unstructured information. They can filter out the white noise and minimise the risk of seeing the same risk or opportunity multiple times. Organisations should also add a human element to their AI, particularly if building a watch list. If a system automatically red flags criminal convictions without scoring them for severity, a person with a speeding fine could be treated in the same way as one serving a long prison sentence. For XAI, systems should always err on the side of the positive. If a red flag is raised, the AI system should not give a flat ‘no’ but should raise an alert for checking by a human. Finally, even the best AI system should generate a few mistakes. Performance should be an eight out of ten, never a ten, or it becomes impossible to trust that the system is working properly. Mistakes can be addressed, and performance continually tweaked, but there will never be perfection.
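A toy sketch of the "score severity, never auto-deny" behaviour described above; the categories, scores and thresholds are invented for illustration, and a real system would keep a full audit trail explaining each decision.

```python
# Invented severity scores per flag type, for illustration only.
SEVERITY = {"speeding fine": 1, "fraud conviction": 7, "violent offence": 9}

def screen(flags):
    """Never return a flat 'no': low scores pass, everything else goes to a human."""
    score = sum(SEVERITY.get(flag, 5) for flag in flags)   # unknown flags get a mid score
    if score == 0:
        return "clear"
    if score <= 2:
        return "clear, with note attached for the record"
    return f"refer to human reviewer (severity score {score})"

print(screen([]))                                     # clear
print(screen(["speeding fine"]))                      # clear, with note attached for the record
print(screen(["fraud conviction", "speeding fine"]))  # refer to human reviewer (severity score 8)
```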


What classic software developers need to know about quantum computing

There are many different parts of quantum that are exciting to study. One is quantum computing, using quantum to do any sort of information processing; the other is communication itself. And maybe the third part that doesn't get as much media attention but should is sensing, using quantum computers to sense things much more sensitively than you would classically. So think about sensing very small magnetic fields for example. So the communication aspect of it is just as important because at the end of the day it's important to have secure communication between your quantum computers as well. So this is something exciting to look forward to. ... So the first tool that you need, and one of the most important tools, is the one that gives you access to the quantum computers. So if you go to quantum-computing.ibm.com and create an account there, we give you immediate access to several quantum computers, which, first of all, every time I say this, just blows my mind because four years ago this wasn't a thing. You couldn't go online and access a quantum computer. I was in grad school because I wanted to do quantum research and needed access to a lab to do this work.
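For readers who want to try that free access, here is a minimal Qiskit sketch that builds a two-qubit entangled (Bell) circuit. Actually running it on IBM hardware requires an IBM Quantum account and a backend from your provider, and the job-submission API differs between Qiskit versions, so that part is left out.

```python
from qiskit import QuantumCircuit

# Bell-state circuit: Hadamard on qubit 0, then CNOT from qubit 0 to qubit 1.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

print(qc.draw())   # ASCII diagram of the circuit
# Measuring many shots on a simulator or real device should return roughly
# half '00' and half '11' -- the signature of entanglement.
```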


Why Darknet Markets Persist

"There are two main reasons here: the lack of alternatives and the ease of use of marketplaces," researchers at the Photon Research Team at digital risk protection firm Digital Shadows tell Information Security Media Group. At least for English-speaking users, such considerations often appear to trump other options, which include encrypted messaging apps as well as forums devoted to cybercrime or hacking. And many users continue to rely on markets despite the threat of exit scams, getting scammed by sellers or getting identified and arrested by police. Another option is Russian-language cybercrime forums, which continue to thrive, with many hosting high-value items. But researchers say that, even when armed with translation software, English speakers often have difficulty coping with Russian cybercrime argot. Many Russian speakers also refuse to do business with anyone from the West. ... Demand for new English-language cybercrime markets continues to be high because so many existing markets get disrupted by law enforcement agencies or have administrators who run an exit scam. Before Empire, other markets that closed after their admins "exit scammed" have included BitBazaar in August, Apollon in March and Nightmare in August 2019.


Open Data Institute explores diverse range of data governance structures

The involvement of different kinds of stakeholders in any particular institution also has an effect on what kinds of governance structures would be appropriate, as different incentives are needed to motivate different actors to behave as responsible and ethical stewards of the data. In the context of the private sector, for example, enterprises that would normally adopt a cut-throat, competitive mindset need to be incentivised for collaboration. Meanwhile, cash-strapped third-sector organisations, such as charities and non-governmental organisations (NGOs), need more financial backing to realise the potential benefits of data institutions. “Many [private sector] organisations are well-versed in stewarding data for their own benefit, so part of the challenge here is for existing data institutions in the private sector to steward it in ways that unlock value for other actors, whether that’s economic value from say a competition point of view, but then also from a societal point of view,” said Hardinges. “Getting organisations to consider themselves data institutions, and in ways that unlock public value from private data, is a really important part of it.”


5 supply chain cybersecurity risks and best practices

Falling prey to the "it couldn't happen to us" mentality is a big mistake. But despite clear evidence that supply chain cyber attacks are on the rise, some leaders aren't facing that reality, even if they do understand techniques to build supply chain resilience more broadly. One of the biggest supply chain challenges is leaders thinking they're not going to be hacked, said Jorge Rey, the principal in charge of information security and compliance for services at Kaufman Rossin, a CPA and advisory firm in Miami. To fully address supply chain cybersecurity, supply chain leaders must realize they need to face the risk reality. The supply chain is a veritable smorgasbord of exploit opportunities -- there are so many information and product handoffs in even a simple one -- and each handoff represents risk, especially where digital technology is involved but easily overlooked. ... Supply chain cyber attacks are carried out with different goals in mind -- from ransom to sabotage to theft of intellectual property, Atwood said. These cyberattacks can also take many forms, such as hijacking software updates and injecting malicious code into legitimate software, as well as targeting IT and operational technology and hitting every domain and any node, Atwood said.


Moving Toward Smarter Data: Graph Databases and Machine Learning

Data plays a significant role in machine learning, and formatting it in ways that a machine learning algorithm can train on is imperative. Data pipelines were created to address this. A data pipeline is a process through which raw data is extracted from the database (or other data sources), is transformed, and is then loaded into a form that a machine learning algorithm can train and test on. Connected features are those features that are inherent in the topology of the graph. For example, how many edges (i.e., relationships) to other nodes does a specific node have? If many nodes are close together in the graph, a community of nodes may exist there. Some nodes will be part of that community while others may not. If a specific node has many outgoing relationships, that node’s influence on other nodes could be higher, given the right domain and context. Like other features extracted from the data and used for training and testing, connected features can be extracted by doing a custom query based on an understanding of the problem space. However, given that these patterns can be generalized for all graphs, unsupervised algorithms have been created that extract key information about the topology of your graph data, which can then be used as features for training your model.
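A small sketch of extracting such connected features with NetworkX follows; the graph is a toy, and in practice these values would come from the graph database itself or from graph algorithms run close to it before being fed into the training pipeline.

```python
import networkx as nx

# Toy graph: nodes are accounts, edges are "transacted with" relationships.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),   # a tightly knit cluster
    ("d", "e"), ("e", "f"),                           # a sparser chain
])

pagerank = nx.pagerank(G)

# One row of purely topological ("connected") features per node.
features = {
    node: {
        "degree": G.degree(node),                        # number of direct relationships
        "pagerank": round(pagerank[node], 3),            # rough influence score
        "clustering": round(nx.clustering(G, node), 3),  # how interconnected the neighbours are
    }
    for node in G.nodes
}

for node, feats in features.items():
    print(node, feats)
```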


Dark Side of AI: How to Make Artificial Intelligence Trustworthy

Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs or malicious physical inputs. Adversarial AI may come in the form of socially engineering humans using an AI-generated voice, which can be used for any type of crime and considered a “new” form of phishing. For example, in March of last year, criminals used AI synthetic voice to impersonate a CEO’s voice and demand a fraudulent transfer of $243,000 to their own accounts. Query attacks involve criminals sending queries to organizations’ AI models to figure out how they work, and may come in the form of black box or white box attacks. Specifically, a black box query attack determines the uncommon, perturbed inputs to use for a desired output, such as financial gain or avoiding detection. Some academics have been able to fool leading translation models by manipulating the output, resulting in an incorrect translation. A white box query attack regenerates a training dataset to reproduce a similar model, which might result in valuable data being stolen. One example was a voice recognition vendor that fell victim to a new, foreign vendor counterfeiting its technology and then selling it, which enabled the foreign vendor to capture market share based on stolen IP.


DDoS attacks rise in intensity, sophistication and volume

The total number of attacks increased by over two and a half times during January through June of 2020 compared to the same period in 2019. The increase was felt across all size categories, with the biggest growth happening at opposite ends of the scale – the number of attacks sized 100 Gbps and above grew a whopping 275% and the number of very small attacks, sized 5 Gbps and below, increased by more than 200%. Overall, small attacks sized 5 Gbps and below represented 70% of all attacks mitigated between January and June of 2020. “While large volumetric attacks capture attention and headlines, bad actors increasingly recognise the value of striking at low enough volume to bypass the traffic thresholds that would trigger mitigation to degrade performance or precision target vulnerable infrastructure like a VPN,” said Michael Kaczmarek, Neustar VP of Security Products. “These shifts put every organization with an internet presence at risk of a DDoS attack – a threat that is particularly critical with global workforces reliant on VPNs for remote login. VPN servers are often left vulnerable, making it simple for cybercriminals to take an entire workforce offline with a targeted DDoS attack.”


Group Privacy and Data Trusts: A New Frontier for Data Governance?

The concept of collective privacy shifts the focus from an individual controlling their privacy rights, to a group or a community having data rights as a whole. In the age of Big Data analytics, the NPD Report does well to discuss the risks of collective privacy harms to groups of people or communities. It is essential to look beyond traditional notions of privacy centered around an individual, as Big Data analytical tools rarely focus on individuals, but on drawing insights at the group level, or on “the crowd” of technology users. In a revealing example from 2013, data processors who accessed New York City’s taxi trip data (including trip dates and times) were able to infer with a degree of accuracy whether a taxi driver was a devout Muslim or not, even though data on the taxi licenses and medallion numbers had been anonymised. Data processors linked pauses in taxi trips with adherence to regularly timed prayer times to arrive at their conclusion. Such findings and classifications may result in heightened surveillance or discrimination for such groups or communities as a whole. ... It might be in the interest of such a community to keep details about their ailment and residence private, as even anonymised data pointing to their general whereabouts could lead to harassment and the violation of their privacy.


Analysis: Online Attacks Hit Education Sector Worldwide

The U.S. faces a rise in distributed denial-of-service attacks, while Europe is seeing an increase in information disclosure attempts - many of them resulting from ransomware incidents, the researchers say. Meanwhile, in Asia, cybercriminals are taking advantage of vulnerabilities in the IT systems that support schools and universities to wage a variety of attacks. DDoS and other attacks are surging because threat actors see an opportunity to disrupt schools resuming online education and potentially earn a ransom for ending an attack, according to Check Point and other security researchers. "Distributed denial-of-service attacks are on the rise and a major cause of network downtime," the new Check Point report notes. "Whether executed by hacktivists to draw attention to a cause, fraudsters trying to illegally obtain data or funds, or a result of geopolitical events, DDoS attacks are a destructive cyber weapon. Beyond education and research, organizations from across all sectors face such attacks daily." In the U.S., the Cybersecurity and Infrastructure Security Agency has warned of an increase in targeted DDoS attacks against financial organizations and government agencies.



Quote for the day:

"One of the most sincere forms of respect is actually listening to what another has to say." -- Bryant H. McGill

Daily Tech Digest - September 18, 2020

Windows 10 upgrades are rarely useful, say IT admins

There is a disconnect between Microsoft's efforts and expectations – months of development time and testing to produce features and functionality that customers will clamor for – and the reaction by, in electioneering terms, a landslide-sized majority of those customers. In many cases, IT admins simply shrug at what Microsoft trumpets. "I understand the concept of WaaS, and the ability to upgrade the OS without a wipe/re-install is a good concept," one of those polled said. "[But] let's concentrate more on useful features, like an upgraded File Explorer, a Start menu that always works, and context-sensitive (and useful) help, and less on, 'It's time to release a new feature update, whether it has any useful new features or not.'" Some were considerably harsher in taking feature upgrades to task. "Don't have a clue why they think some of the new features might be worth our time, or even theirs," said another of those polled. And others decried what they saw as wasted opportunities. "It's mostly bells, whistles and window-dressing," one IT admin said. "It seems like no fundamental problems are tackled. Although updates DO every now and then cause new problems in fundamental functionality. Looks like there's at least some scratching done on the fundamental surface – [but] without explanation."


Adaptive Architecture: A Bridge between Fashion and Technology

Conceptually, IT borrowed a lot of themes from Civil Engineering, one being Architecture. Despite the 3000 years that separate the two fields, Architecture and Software Architecture share similar vocabulary across their multiple definitions, such as "structure", "components", and "environment". At first, that relationship was really strong because the technology was "more concrete", heavier, and, obviously, slower. Everything was super difficult to change and applications used to survive without an update for quite a long time. But as computers advanced, the world became submerged in a massive flow of information on digital platforms, and customers can now connect directly to businesses through these channels, creating conditions that demand companies be able to push reliable modifications to their websites, or applications, every day, or even multiple times throughout the day. This progress didn't happen overnight, and as digital evolved, the technical landscape started to change, reflecting new requirements and problems. In 2001, in an initiative to understand these obstacles to developing software, obstacles still relevant to this day, seventeen people gathered in the Wasatch mountains of Utah. From that meeting, "The Agile Manifesto" was created, a declaration based on four key values and 12 principles, establishing a mindset called "Agile".


Deep Dive into OWIN Katana

OWIN stands for Open Web Interface for .NET. OWIN is an open standard specification which defines a standard interface between .NET web servers and web applications. The aim is to provide a standard interface which is simple, pluggable and lightweight. OWIN is motivated by the development of web frameworks in other coding languages such as Node.js for JavaScript, Rack for Ruby, and WSGI for Python. All these web frameworks are designed to be fast and simple, and they enable the development of web applications in a modular way. In contrast, prior to OWIN, every .NET web application required a dependency on System.Web.dll, which was tightly coupled with Microsoft's IIS (Internet Information Services). This meant that .NET web applications came with a number of application component stacks from IIS, whether they were actually required or not. This made .NET web applications, as a whole, heavier, and they performed slower than their counterparts in other coding languages in many benchmarks. OWIN was initiated by members of Microsoft's communities, such as the C#, F# and dynamic programming communities. Thus, the specification is largely influenced by the programming paradigms of those communities.


Banking on digitalisation: A transformation journey enabled by technology, powered by humans

Banks are now staring at the massive challenge of continuing their digital investments in a cost constrained environment. Getting their workforce ready to develop the technologies, while continuing to deliver value to their customers is another issue. At the same time, they are competing with new digital banks that will undoubtedly come in with newer technology built on modern architecture without the legacy debt. However, there are industry players that may have cracked the code to successful digitalisation. I know of incumbent banks as well as digital banks developing world-class digital capabilities at lower costs, while training their people to make full use of their new digital investments. Recently the finance function of a leading global universal bank adopted a “citizen-led” digital transformation, training 300+ “citizen” developers who identified 200+ new use cases resulting in an annual run rate cost reduction of $15 million. This case study highlights the importance of engaging and upskilling your workforce while contributing to bottom line benefits. Over the last two decades, technology by itself has evolved and now has the ability to transform whole businesses in the financial services sector, similar to its impact on other industries such as retail and media. Traditionally, for banks, technology was a support function enabling product and customer strategies.


Google details RigL algorithm for building more efficient neural networks

Google researchers put RigL to the test in an experiment involving an image processing model. It was given the task of analyzing images containing different characters. During the model training phase, RigL determined that the AI only needs to analyze the character in the foreground of each image and can skip processing the background pixels, which don’t contain any useful information. The algorithm then removed connections used for processing background pixels and added new, more efficient ones in their places.  “The algorithm identifies which neurons should be active during training, which helps the optimization process to utilize the most relevant connections and results in better sparse solutions,” Google research engineers Utku Evci and Pablo Samuel Castro explained in a blog post. “At regularly spaced intervals we remove a fraction of the connections.” There are other methods besides RigL that attempt to compress neural networks by removing redundant connections. However, those methods have the downside of significantly reducing the compressed model’s accuracy, which limits their practical application. Google says RigL achieves higher accuracy than three of the most sophisticated alternative techniques while also “consistently requiring fewer FLOPs (and memory footprint) than the other methods.”
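As a concrete illustration of the prune-and-regrow step described above, here is a deliberately simplified, single-matrix sketch in Python/NumPy. It follows the published high-level recipe (drop the weakest active weights by magnitude, grow the same number of connections where the dense gradient is largest) but glosses over details such as the drop-fraction schedule and per-layer sparsity distribution; the function and parameter names are illustrative, not taken from the RigL code.

```python
import numpy as np

def rigl_connectivity_update(weights, mask, dense_grad, drop_fraction=0.3):
    """One simplified RigL-style update for a single sparse weight matrix.

    `mask` is 1 where a connection is active and 0 where it is pruned;
    all three arrays share the same shape. Returns updated weights and mask.
    """
    w = weights.ravel().copy()
    m = mask.ravel().copy()
    g = dense_grad.ravel()

    active = np.flatnonzero(m == 1)
    inactive = np.flatnonzero(m == 0)
    k = int(drop_fraction * active.size)
    if k == 0:
        return weights, mask

    # Drop: remove the k active connections with the smallest magnitude.
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    m[drop], w[drop] = 0, 0.0

    # Grow: re-enable the k previously inactive connections whose dense
    # gradient magnitude is largest, so capacity moves to where the training
    # signal suggests it is most useful. New connections start at zero.
    grow = inactive[np.argsort(-np.abs(g[inactive]))[:k]]
    m[grow], w[grow] = 1, 0.0

    return w.reshape(weights.shape), m.reshape(mask.shape)
```

In a full training loop this update would run only at regularly spaced intervals, with ordinary sparse gradient steps in between.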


IBM, AI And The Battle For Cybersecurity

While older adversarial attack patterns were algorithmic and easier to detect, new attacks add AI features such as natural language processing and more natural human-computer interaction to make malware more evasive, pervasive and scalable. The malware will use AI to keep changing form in order to evade and fool common detection techniques and rules. Automated techniques can make malware more scalable and, combined with AI, allow it to move laterally through an enterprise and attack targets without human intervention. The use of AI in cybersecurity attacks will likely become more pervasive. Better spam can be crafted that avoids detection, or that is personalized to a specific target as a form of spear-phishing attack, using natural language processing to produce more human-like messages. In addition, malware can be smart enough to understand when it is in a honeypot or sandbox and will avoid malicious execution to look more benign and not tip off security defenses. Adversarial AI also attacks the human element, using AI-augmented chatbots to disguise the attack with human-like emulation. This can escalate to the point where AI-powered voice synthesis can fool people into believing that they are dealing with a real human within their organization.


'We built two data centers in the middle of the pandemic'

With a substantial proportion of chips and components coming from the Wuhan region of China, supply chains were already facing delays. After negotiation with suppliers, Harvey's team managed to procure the right equipment on time, air-freighting components to the island from the UK mainland instead of using ferry services as usual. As the States of Guernsey started restricting travel, a local Agilisys team was designated to pick up the data centers' build. The team's head of IT services, Shona Leavey, remembers juggling the requirements of the build while also setting up civil servants with laptops to make sure the States could continue to deliver public services, even remotely. "We were rolling out Teams to civil servants, and at the same time had some of the team working on the actual data center build," Leavey tells ZDNet. "Any concept of a typical nine-to-five went out the window." Given the timeline for the build, it became evident that some engineers would have to go into the data centers to set up the equipment during the early months of summer. That meant the Agilisys team started a long, thorough health and safety assessment.


Deepfake Detection Poses Problematic Technology Race

The problem is well known among researchers. Take Microsoft's Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator spots possible deepfakes by finding the boundary between inserted images and the original video, providing a score for the video as it plays. While the technology is being released as a way to detect issues during the election cycle, Microsoft warned that disinformation groups will quickly adapt. "The fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," said Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post describing the technology. "However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes." Microsoft is not alone in considering current deepfake detection technology a temporary fix. In its Deepfake Detection Challenge (DFDC) earlier in the summer, Facebook found that the winning algorithm accurately detected fake videos only about two-thirds of the time.


Deliver Faster by Killing the Test Column

Instead of testers simply picking work out of this column and working on it till it's done, they should work with the team to help them understand how they approach testing, the types of things they are looking for, and what they actually find during testing. Doing this with a handful of tasks is likely to help them identify some key themes within their work. For example, are there similar root causes, such as usability or accessibility issues, or some hardware/software combination that always results in a bug? Is there something the devs could look out for while making the changes? These themes can be used to create a backlog of tasks the team can begin to tackle, to see whether those issues can be addressed earlier in the development life cycle. Focusing on the process and not the people makes it easier to talk about what testers are doing and how developers and testers could mitigate this work earlier in the life cycle, and it sows the seeds of the continuous improvement programme. Leadership in this process is very important. Leaders need to help testers feel comfortable that they are not being targeted as the "problem" within the team, but are actually the solution, educating the team about the risks they are looking for when testing.


Mitigating Cyber-Risk While We're (Still) Working from Home

At home, most folks use a router provided by their Internet service provider. The home router has a firewall and NAT functionality, so your family can safely connect out to your favorite websites and those websites can send the data you asked for back to you. However, with most employees now working at home, the enterprise-grade firewalls at the edge of corporate networks are no longer protecting them or providing the visibility IT needs to keep corporate users safe. That's where an endpoint security solution that can provide visibility and segment and limit access between different internal networks and laptop devices comes in handy. With CISOs, government employees, and business executives sharing home networks with their 15-year-old gamers and TikTok addicts, it's imperative to extend the principle of least privilege to the systems holding important data inside the home network. That means that even if a bad actor gains access to your kid's network, your laptop and your organization's internal assets stay in the clear. When it comes to proactively protecting against cyber threats, segmentation is one of the best ways to ensure that bad actors stay contained when they breach the perimeter. Because, let's be honest, it's bound to happen.



Quote for the day:

"Challenges are what make life interesting and overcoming them is what makes life meaningful." --Joshua Marine

Daily Tech Digest - September 17, 2020

Outbound Email Errors Cause 93% Increase in Breaches

Egress CEO Tony Pepper said the problem is only going to get worse with increased remote working and higher email volumes, which create prime conditions for the kind of outbound email data breaches that traditional DLP tools simply cannot handle. "Instead, organizations need intelligent technologies, like machine learning, to create a contextual understanding of individual users that spots errors such as wrong recipients, incorrect file attachments or responses to phishing emails, and alerts the user before they make a mistake," he said. The most common breach types were replying to spear-phishing emails (80%), emails sent to the wrong recipients (80%) and sending the incorrect file attachment (80%). Speaking to Infosecurity, Egress VP of corporate marketing Dan Hoy said businesses reported an increase in outbound emails since lockdown, "and more emails mean more risk." He called it a numbers game that has increased risk: remote workers are more susceptible and more likely to make mistakes the further they are removed from security and IT teams. According to the research, 76% of breaches were caused by "intentional exfiltration." Hoy said much of this is employees innocently trying to do their job, without meaning to cause harm, by sending files to webmail accounts, which still increases risk, "and you cannot ignore the malicious intent."


‘The demand for cloud computing & cybersecurity professionals is on the rise’

The COVID-19 pandemic has undoubtedly disrupted the normalcy of every company across every sector. At Clumio, our primary focus continues to be the health and wellbeing of our people. While tackling the situation, we also need to keep pace with our professional duties. We made the transition to remote work immediately and are in constant touch with our employees to ensure they don't feel isolated and remain focused on their work. We are encouraging employees to follow the best practices of remote work and motivating them to spend time on their emotional, mental and physical wellbeing during this time. We conduct Zoom happy hours frequently to stay connected and have fun. As part of one such session, we recently celebrated a virtual baby shower for one of our colleagues. We also had our annual summer picnic and created wonderful memories while maintaining social distance, but staying together. During this time, we have also launched the India Research and Development center in Bangalore. Our India center will drive front-end innovation and research to build cloud solutions. India has a huge talent pool in technology, and it is only growing. We have also started virtual hiring and onboarding during the pandemic.


AI investment to increase but challenges remain around delivering ROI

ROI on AI is still a work in progress that requires a focus on strategic change. As companies progress in AI use, they often shift their focus from automating internal employee and customer processes to delivering on strategic goals. For example, 31% of AI leaders report increased revenue, 22% greater market share, 22% new products and services, 21% faster time-to-market, 21% global expansion, 19% creation of new business models, and 14% higher shareholder value. In fact, the AI-enabled functions showing the highest returns are all fundamental to rethinking business strategies for a digital-first world: strategic planning, supply chain management, product development, and distribution and logistics. The study found that automakers are at the forefront of AI excellence, as they accelerate AI adoption to deliver on every part of their business strategy, from upgrading production processes and improving safety features to developing self-driving cars. Of the 12 industries benchmarked in the study, automotive employs the largest AI teams. With the government actively supporting AI under its Society 5.0 program, Japanese companies lead the pack in AI adoption. 


The future of .NET Standard

.NET 5 and all future versions will always support .NET Standard 2.1 and earlier. The only reason to retarget from .NET Standard to .NET 5 is to gain access to more runtime features, language features, or APIs. So, you can think of .NET 5 as .NET Standard vNext. What about new code? Should you still start with .NET Standard 2.0 or should you go straight to .NET 5? It depends. App components: If you’re using libraries to break down your application into several components, my recommendation is to use netX.Y where X.Y is the lowest number of .NET that your application (or applications) are targeting. For simplicity, you probably want all projects that make up your application to be on the same version of .NET because it means you can assume the same BCL features everywhere. Reusable libraries: If you’re building reusable libraries that you plan on shipping on NuGet, you’ll want to consider the trade-off between reach and available feature set. .NET Standard 2.0 is the highest version of .NET Standard that is supported by .NET Framework, so it will give you the most reach, while also giving you a fairly large feature set to work with. We’d generally recommend against targeting .NET Standard 1.x as it’s not worth the hassle anymore. 


Fintech sector faces "existential crisis" says McKinsey

After growing more than 25% a year since 2014, investment into the sector dropped by 11% globally and 30% in Europe in the first half of 2020, says McKinsey, citing figures from Dealroom. In July 2020, after months of Covid-19-related lockdowns in most European countries, the drop was even steeper, 18% globally and 44% in Europe, versus the previous year. "This constitutes a significant challenge for fintechs, many of which are still not profitable and have a continuous need for capital as they complete their innovation cycle: attracting new customers, refining propositions and ultimately monetizing their scale to turn a profit," states the McKinsey paper. "The Covid-19 crisis has in effect shortened the runway for many fintechs, posing an existential threat to the sector." Analyzing fundraising data for the last three years from Dealroom, the consultancy found that as much as €5.7 billion will be needed to sustain the EU fintech sector through the second half of 2021 — a point at which some sort of economic normalcy might begin to emerge. It is not clear where these funds will come from, however. Fintechs are largely unable to access loan bailout schemes due to their pre-profit status.


Artificial Intuition: A New Generation of AI

Artificial intuition is an easy term to misread, because it sounds like artificial emotion and artificial empathy. However, it differs fundamentally. Researchers are working on artificial emotion so that machines can mimic human behavior more accurately. Artificial empathy aims to identify a human's state of mind in real time so that, for example, chatbots, virtual assistants and care robots can respond to people more appropriately in context. Artificial intuition is closer to human instinct, because it can rapidly assess the whole of a situation, including very subtle markers of specific activity. The fourth generation of AI is artificial intuition, which enables computers to discover threats and opportunities without being told what to search for, just as human intuition allows us to make decisions without being explicitly told how to do so. It's like a seasoned detective who can walk into a crime scene and know right away that something doesn't seem right, or an experienced investor who can spot a coming trend before anyone else.


Attacked by ransomware? Five steps to recovery

Arguably the most challenging step in recovering from a ransomware attack is the initial awareness that something is wrong. It's also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected, and that directly impacts how much time it will take to recover your environment. Ransomware is designed to be very hard to detect; by the time you see the ransom note, it may have already inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further. Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack, and it produces the fewest false positives compared with signature-based or network traffic-based detection. One additional method of detecting a ransomware attack is to use a "signature-based" approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended on its own, however, because sophisticated attacks use new, previously unknown forms of ransomware.
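To make the idea of abnormal-file-behavior detection concrete, here is a toy Python sketch (not a production detector) that flags bursts of file modifications, the kind of spike mass encryption tends to produce. The event source, window size and baseline threshold are all assumptions that a real endpoint product would tune or learn per environment.

```python
from collections import deque

def detect_bursts(events, window_seconds=10, baseline_per_window=20):
    """Yield an alert whenever file-modification activity spikes.

    `events` is an iterable of (timestamp, path) pairs ordered by time,
    e.g. from whatever file-activity telemetry an endpoint agent already
    collects. An alert fires when more than `baseline_per_window` files
    change inside a sliding window of `window_seconds` seconds.
    """
    window = deque()
    for ts, path in events:
        window.append((ts, path))
        # Slide the window forward, discarding events that are too old.
        while ts - window[0][0] > window_seconds:
            window.popleft()
        if len(window) > baseline_per_window:
            yield ts, len(window), [p for _, p in window][-5:]

# Example with synthetic telemetry: sporadic edits stay quiet, while 50 files
# rewritten within about two seconds triggers an alert.
if __name__ == "__main__":
    normal = [(t, f"doc{t}.txt") for t in range(0, 60, 5)]
    burst = [(61 + i * 0.04, f"doc{i}.txt.locked") for i in range(50)]
    for when, count, sample in detect_bursts(normal + burst):
        print(f"t={when:.1f}s: {count} files changed recently, e.g. {sample}")
        break
```

A real product would combine a signal like this with entropy checks, rename patterns and known-bad signatures to keep false positives low.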


Struggling to Secure Remote IT? 3 Lessons from the Office

To prepare for the arrival of CCPA, business leaders told us they spent an average of $81.9 million on compliance during the last 12 months. Yet despite making investments in hiring (93%), workforce training (89%), and purchasing new software or services to ensure compliance (95%), 40% still felt unprepared for the evolving regulatory landscape. Why? Because the root causes were not addressed. Perhaps their IT operations and security teams worked in silos, creating complexity and narrowing their visibility into their IT estates. Maybe their teams were completely unaware that other departments introduced their own software into the environment. Or more commonly, the organization used legacy tooling that wasn't plugged into the endpoint management or security systems of the IT teams. These are just some of the root causes that keep organizations in the dark and prone to exploits. While the transition to remote work was swift, it has presented businesses with an opportunity to face these issues head-on. As workforces continue to work remotely, CISOs and CIOs now have the chance to evaluate how they effectively manage risk in the long term, which includes running continuous risk assessments and investing in solutions that deliver rapid incident response and improved decision-making.


CTO challenges around the return to the workplace

Every CTO tells us that the digital transformation and change management programmes designed to address relentless regulatory, competitor, innovation and customer challenges must go ahead as planned, regardless of the pandemic. You may be tackling the automation of end-to-end electronic trading workflows or creating mobile framework applications. Whatever the focus, firms stumble over the limitations of legacy systems that hamper the journey towards electronification: trading desks still depend on quotes, orders and trades processed from a multitude of external trading platforms, and inconsistency, lag and gaps all result in costly errors, which are missed opportunities at best, and regulatory reporting breaches and huge fines at worst. In the quest for efficiencies, mitigation of risk, and a seamless and future-proofed IT architecture, firms must automate to meet their regulatory obligations and deliver client, management and regulatory transparency. And this hasn't even touched on the ambition to create end-to-end, freely flowing models of perfectly clean, ordered and well-governed data. Every CTO needs to apply extraction and visualisation layers and mine the data for valuable insights that can be fed further upstream.


The Case for Explainable AI (XAI)

Despite the numerous benefits of developing XAI, many formidable challenges persist. A significant hurdle, particularly for those attempting to establish standards and regulations, is that different users will require different levels of explainability in different contexts. Models deployed to make decisions that directly impact human life, such as those in hospitals or military environments, will produce different needs and constraints than ones used in low-risk situations. There are also nuances within the performance-explainability trade-off. Infrastructure and systems designers are constantly balancing the demands of competing interests. ... There are also a number of risks associated with explainable AI. Systems that produce seemingly credible but actually incorrect results would be difficult for most consumers to detect. Trust in AI systems can enable deception by way of those very systems, especially when stakeholders provide features that purport to offer explainability but actually do not. Engineers also worry that explainability could open up greater opportunities for exploitation by malicious actors. Simply put, if it is easier to understand how a model converts input into output, it is likely also easier to craft adversarial inputs designed to achieve specific outputs.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham