Daily Tech Digest - September 22, 2020

How industrial AI will power the refining industry in the future

The ultimate vision for the industry is the self-optimising, autonomous plant – and the increasing deployment of artificial intelligence (AI) across the sector is bringing this reality ever closer. However, while refining has been an early adopter of many digital tools, the industry is yet to fully realise the potential of industrial AI. That is, in no small part, because AI and machine learning are too often looked at in isolation, rather than being combined with existing engineering capabilities – models, tools and expertise – to deliver a practical solution that effectively optimises refinery assets. ... Machine learning is used to create the model, leveraging simulation, plant or pilot-plant data. The model also uses domain knowledge, including first principles and engineering constraints, to build an enriched model, without requiring the user to have deep process expertise or be an AI expert. The solutions supported by hybrid models act as a bridge between the first-principles-focused world of the past and the “smart refinery” environment of the future. They are the essential catalyst helping to enable the self-optimising plant.
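
To make the hybrid-model idea concrete, here is a minimal sketch of the general technique (not the vendor's implementation): a first-principles function supplies the baseline prediction, and a machine-learning regressor is trained only on the residual, so the learned part stays anchored to engineering knowledge. All function names and process numbers below are made up for illustration.

```python
# Hypothetical sketch of a hybrid model: a first-principles baseline plus a
# machine-learned correction trained on the residual the physics cannot explain.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def first_principles_yield(temp_k, pressure_bar):
    """Stand-in physics model (illustrative only): an Arrhenius-style term."""
    return 0.8 * np.exp(-2000.0 / temp_k) * np.sqrt(pressure_bar)

# Simulated "plant data": operating conditions and measured yield.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(550, 650, 500),   # temperature [K]
                     rng.uniform(5, 25, 500)])     # pressure [bar]
y_measured = (first_principles_yield(X[:, 0], X[:, 1])
              + 0.01 * np.sin(X[:, 1])             # unmodelled plant effect
              + rng.normal(0, 0.002, 500))         # measurement noise

# Train the ML part only on what the physics model cannot explain.
residual = y_measured - first_principles_yield(X[:, 0], X[:, 1])
correction = GradientBoostingRegressor().fit(X, residual)

def hybrid_predict(temp_k, pressure_bar):
    x = np.array([[temp_k, pressure_bar]])
    return first_principles_yield(temp_k, pressure_bar) + correction.predict(x)[0]

print(hybrid_predict(600.0, 12.0))
```

Because the correction only learns the residual, predictions degrade gracefully toward the physics baseline where data is sparse, which is one reason hybrid models are favoured for optimising real assets.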


Microsoft's new feature uses AI to make video chat less weird

Eye Contact uses the custom artificial intelligence (AI) engine in the Surface Pro X's SQ1 SOC, so you shouldn't see any performance degradation, as much of the complex real-time computational photography is handed off to it and to the integrated GPU. Everything is handled at a device driver level, so it works with any app that uses the front-facing camera -- it doesn't matter if you're using Teams or Skype or Slack or Zoom, they all get the benefit. There's only one constraint: the Surface Pro X must be in landscape mode, as the machine learning model used in Eye Contact won't work if you hold the tablet vertically. In practice that shouldn't be much of an issue, as most video-conferencing apps assume that you're using a standard desktop monitor rather than a tablet PC, and so are optimised for landscape layouts. The question for the future is whether this machine-learning approach can be brought to other devices. Sadly it's unlikely to be a general-purpose solution for some time; it needs to be built into the camera drivers and Microsoft here has the advantage of owning both the camera software and the processor architecture in the Surface Pro X.


Digital transformation: 5 ways the pandemic forced change

Zemmel says that the evolution of the role of the CIO has been accelerated as well. He sees CIOs increasingly reporting to the CEO because they increasingly have a dual mandate. In addition to their historical operational role running the IT department, they now are also customer-facing and driving revenue. That mandate is not new for forward-looking IT organizations, but the pandemic has made other organizations hyper-aware of IT’s role in driving change quickly. CIOs are becoming a sort of “chief influencing officer who is breaking down silos and driving adoption of digital products,” Zemmel adds. Experian’s Libenson puts it this way: “The pandemic has forced us to be closer to the business than before. We had a seat at the table before. But I think we will be a better organization after this.” The various panelists gave nods to the role of technology, especially the use of data; Zemmel describes the second generation of B2B digital selling as “capturing the ‘digital exhaust’ to drive new analytic insights and using data to drive performance and create more immersive experiences.”


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

Graphics APIs have come a long way from a small set of basic commands allowing limited control of the configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started gaining widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. ... This article describes Diligent Engine, a lightweight cross-platform graphics API abstraction layer designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. The full source code is available for download on GitHub and is free to use.
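
The abstraction-layer idea is straightforward to sketch. The snippet below illustrates the general pattern in Python purely for brevity; Diligent Engine's real front-end is C/C++, and the class and method names here are hypothetical, not the library's actual API.

```python
# Illustrative abstraction-layer pattern (hypothetical names, not Diligent's API):
# application code targets one interface, and a backend is selected at startup.
from abc import ABC, abstractmethod

class RenderBackend(ABC):
    """Common front-end: the application talks only to this interface."""
    @abstractmethod
    def create_buffer(self, size_bytes: int) -> int: ...
    @abstractmethod
    def draw(self, vertex_count: int) -> None: ...

class VulkanBackend(RenderBackend):
    def create_buffer(self, size_bytes: int) -> int:
        print(f"vkCreateBuffer({size_bytes})")              # next-gen API underneath
        return 1
    def draw(self, vertex_count: int) -> None:
        print(f"vkCmdDraw({vertex_count})")

class D3D11Backend(RenderBackend):
    def create_buffer(self, size_bytes: int) -> int:
        print(f"ID3D11Device::CreateBuffer({size_bytes})")  # legacy API underneath
        return 1
    def draw(self, vertex_count: int) -> None:
        print(f"ID3D11DeviceContext::Draw({vertex_count})")

def render_frame(backend: RenderBackend) -> None:
    # The same application code runs unchanged on next-gen and older APIs.
    backend.create_buffer(1024)
    backend.draw(3)

render_frame(VulkanBackend())   # or D3D11Backend(), chosen per platform
```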


Supporting mobile workers everywhere

It is amazing how quickly video conferencing has been accepted as part of the daily routine. Such is the success of services like Zoom that CIOs need to reassess priorities. In a workforce where people are working from home regularly, remote access is not limited to a few, but must be available to all. Mobile access and connectivity for the mobile workforce need to extend to employees’ homes. Traditional VPN access has scalability limitations and is inefficient when used to provide access to modern SaaS-based enterprise applications. To reach all home workers, some organisations are replacing their VPNs with SD-WANs. There is also an opportunity to revisit bring-your-own-device (BYOD) policies. If people have access to computing at home and their devices can be secured, then CIOs should question the need to push out corporate laptops to home workers. While IT departments may have traditionally deployed virtual desktop infrastructure (VDI) to stream business applications to thin client devices, desktop as a service (DaaS) is a natural choice for delivering a managed desktop environment to home workers. For those organisations that are reluctant to use DaaS in the public cloud, desktop software can easily be delivered in a secure and manageable way using containers, as Oxford University Social Sciences Division (OSSD) has found (see below).


Secure data sharing in a world concerned with privacy

Compliance costs and legal risks are prompting companies to consider an innovative data sharing method based on PETs: a new genre of technologies which can help them bridge competing privacy frameworks. PETs are a category of technologies that protect data along its lifecycle while maintaining its utility, even for advanced AI and machine learning processes. PETs allow their users to harness the benefits of big data while protecting personally identifiable information (PII) and other sensitive information, thus maintaining stringent privacy standards. One such PET playing a growing role in privacy-preserving information sharing is Homomorphic Encryption (HE), a technique regarded by many as the holy grail of data protection. HE enables multiple parties to securely collaborate on encrypted data by conducting analysis on data which remains encrypted throughout the process, never exposing personal or confidential information. Through HE, companies can derive the necessary insights from big data while protecting individuals’ personal details – and, crucially, while remaining compliant with privacy legislation because the data is never exposed.
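
To make the "compute on data that stays encrypted" idea concrete, below is a deliberately tiny, insecure toy of the Paillier cryptosystem, a classic additively homomorphic scheme (shown only to illustrate the property; it is not the scheme any particular PET vendor uses). Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a third party can add values it never sees in the clear.

```python
# Toy Paillier cryptosystem (tiny primes, no padding): for illustration only,
# never for real data. It is additively homomorphic: multiplying ciphertexts
# yields an encryption of the sum of the plaintexts. Requires Python 3.9+.
import math
import random

p, q = 293, 433                      # demo primes; real keys use ~1536-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)         # lambda(n)
g = n + 1                            # standard choice of generator

def L(u):                            # L(u) = (u - 1) / n
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse used in decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2   # homomorphic addition
print(decrypt(c_sum))                    # 42, computed without decrypting a or b
```

Production systems use lattice-based fully homomorphic schemes with much larger parameters, but the workflow is the same: the data holder encrypts, the collaborator computes on ciphertexts, and only the data holder can decrypt the result.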



When -- and when not -- to use cloud native security tools

Cloud native security tools like Amazon Inspector and Microsoft Azure Security Center automatically inspect the configuration of common types of cloud workloads and generate alerts when potential security problems are detected. Google Cloud Data Loss Prevention and Amazon Macie provide similar functionality for data by automatically detecting sensitive information that is not properly secured and alerting the user. To protect data even further there are tools, such as Amazon GuardDuty and Azure Advanced Threat Protection, that monitor for events that could signal security issues within cloud-based and on-premises environments. ... IT teams use services like Google Cloud Armor, AWS Web Application Firewall and Azure Firewall to configure firewalls that control network access to applications running in the cloud. Related tools provide mitigation against DDoS attacks that target cloud-based resources. ... Data stored on the major public clouds can be encrypted electively -- or is encrypted automatically by default -- using native functionality built into storage services like Amazon S3 and Azure Blob Storage. Public cloud vendors also offer cloud-based key management services, like Azure Key Vault and Google Key Management Service, for securely keeping track of encryption keys.
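
As a concrete instance of the storage-encryption point, the sketch below uses boto3 to turn on default server-side encryption for an S3 bucket and then read the setting back. It assumes AWS credentials are already configured, and the bucket name is a placeholder.

```python
# Sketch: enable default server-side encryption (SSE-S3 / AES-256) on a bucket
# and verify it. Assumes AWS credentials are configured; bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket-name"   # placeholder

s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

config = s3.get_bucket_encryption(Bucket=bucket)
for rule in config["ServerSideEncryptionConfiguration"]["Rules"]:
    print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```

Detection-oriented services such as Macie or GuardDuty can then flag buckets or activity that deviate from a baseline like this one.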


Four Case Studies for Implementing Real-Time APIs

Unreliable or slow performance can directly impact or even prevent the adoption of new digital services, making it difficult for a business to maximize the potential of new products and expand its offerings. Thus, it is not only crucial that an API processes calls at acceptable speeds, but it is equally important to have an API infrastructure in place that is able to route traffic to resources correctly, authenticate users, secure APIs, prioritize calls, provide proper bandwidth, and cache API responses. Most traditional APIM solutions were made to handle traffic between servers in the data center and the client applications accessing those APIs externally (north-south traffic). They also need constant connectivity between the control plane and data plane, which requires using third-party modules, scripts, and local databases. Processing a single request creates significant overhead — and it only gets more complex when dealing with the east-west traffic associated with a distributed application. Considering that a single transaction or request could require multiple internal API calls, the bank profiled in one of the case studies found it extremely difficult to deliver good user experiences to its customers.
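
One of the infrastructure responsibilities listed above, prioritizing calls and protecting bandwidth, is commonly implemented with a token-bucket limiter in the API gateway. The sketch below is a generic illustration of that algorithm, not the code of any particular APIM product.

```python
# Generic token-bucket rate limiter, the kind of policy an API gateway applies
# per client before forwarding a call upstream (illustrative only).
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                    # caller should return HTTP 429

limiter = TokenBucket(rate_per_sec=5, burst=10)
accepted = sum(limiter.allow() for _ in range(100))
print(f"{accepted} of 100 back-to-back calls accepted")
```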


Building the foundations of effective data protection compliance

Data protection by design and default needs to be planned within the whole system, depending on the type of data and how much data a business has. Data classification is the categorization of data according to its level of sensitivity or value, using labels. These are attached as visual markings and as metadata within the file. When classification is applied, the metadata ensures that the data can only be accessed or used in accordance with the rules that correspond to its label. Businesses need to mitigate attacks and employee mistakes by starting with policy - assessing who has access to what. Then they should select a tool that fits the policy, not the other way round; you should never be faced with selecting a tool and then having to rewrite your policy to fit it. The right tool will then support users with automation and labelling, which will enhance the downstream technology. Once data is appropriately classified, security tools such as Data Loss Prevention (DLP), policy-based email encryption, access control and data governance tools are exponentially more effective, as they can use the information provided by the classification label and metadata that tells them how the data should be managed and protected.
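
The label-then-enforce flow can be sketched in a few lines (all names and the policy table below are hypothetical): the classification label travels with the file as metadata, and downstream tools consult it before allowing an action.

```python
# Hypothetical sketch: classification labels stored as metadata drive downstream
# handling rules (e.g. whether a DLP check blocks an email attachment).
from dataclasses import dataclass, field

POLICY = {  # label -> allowed actions (assumed policy, set by the business)
    "Public":       {"email_external", "email_internal", "print"},
    "Internal":     {"email_internal", "print"},
    "Confidential": {"email_internal"},
}

@dataclass
class Document:
    name: str
    label: str = "Internal"                      # default classification
    metadata: dict = field(default_factory=dict)

    def __post_init__(self):
        self.metadata["classification"] = self.label  # label travels with the file

def is_allowed(doc: Document, action: str) -> bool:
    label = doc.metadata.get("classification", "Confidential")  # fail closed
    return action in POLICY.get(label, set())

doc = Document("board-minutes.docx", label="Confidential")
print(is_allowed(doc, "email_external"))   # False -> DLP tool blocks the send
print(is_allowed(doc, "email_internal"))   # True
```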


Q&A on the Book Fail to Learn

People often fear failure because of the stakes associated with it. When we create steep punishment systems and “one-strike-you’re-out” rules, it’s only natural to be terrified of messing up. This is where we need to think more like game designers. Games encourage trial and error because the cost of starting over in a game is practically nothing. If I die playing Halo, I get to respawn and try again immediately. We need to create more “respawn” options in the rest of our lives. This is something that educators can do in their course design. But it’s also something we can encourage as managers, company leaders, or simply as members of society. The best way to do this is to start talking more about our mistakes. These are things we should be able to celebrate, laugh over, shake our collective heads at, and eventually grow from. ... If we go back to people like Dyson and Edison, you see failure-to-success ratios that reach five-thousand or even ten-thousand to one. A venture capitalist who interviewed hundreds of CEOs arrived at the same ratio for start-up companies making it big: about a 10,000:1 failure-to-success ratio. Now, we probably don’t need that many failures in every segment of our lives, but think about how far off most of us are from these numbers.



Quote for the day:

"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani

Daily Tech Digest - September 21, 2020

Ransomware attacks growing in number, severity: Why experts believe it will get worse

The attackers are also changing their tactics, making it difficult to defend against them. “Those tactics include things like not just encrypting victims’ computer networks but also exfiltrating data on those victim networks,” Holmes said. “Now, when victims realize they’ve become a victim of a ransomware attack, it’s not just that their data is encrypted. It’s also that they’ve lost all their — or much of their — information. The ransomware actors are holding that data at risk and telling victims if they don’t pay the ransom, they’re going to leak that data to the public.” Perhaps not surprisingly, ransomware attackers are starting to band together, Holmes said. “We’ve seen some of these ransomware actors entering into a cartel with one another. Under this cartel model, they’ll share information amongst the group members and share intelligence and share techniques. It’s a bit concerning to us as well [because] it shows a major shift change among the ransomware actors.” The ransomware cartels are forming into business-like organizations with specialization of labor, Jason Conboy of DHS’s Investigations division said. “You have your malware office… and they have effectively built skills to write that malware. You’ve got your money mules.


Cloud banking: More than just a CIO conversation

The bank of 2030 will look very different from today. Facing changing consumer expectations, emerging technologies, and alternative business models, banks need to start putting strategies in place now to help them prepare for this future. An important indicator of the shifting landscape? Cloud computing is moving to the forefront as a focus for the chief information officer, C-suite executives, and board members. Banking and capital markets leaders increasingly recognize that cloud is more than a technology; it is a destination for banks and other financial services firms to store data and applications and access advanced software applications via the internet. The leading public cloud providers offer an array of innovative products-as-a-service that can be accessed on their platforms and help banks implement business and operating models to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets. The cloud also offers a huge opportunity to synchronize the enterprise; to break down operational and data silos across risk, finance, regulatory, customer support, and more. Once massive data sets are combined in one place, the organization can apply advanced analytics for integrated insights.


Machine Learning Biases Might Define Minority Health Outcomes

Algorithmic biases present marginalized patients as healthier than they are, disqualifying them from the speciality care they need. ... AI bias issues are in no way new to the computer science community; however, it is unclear whether the medical community even recognizes the problem. Algorithm-driven patient care results in less adequate care for people striving to push back against the tide of systemic racism, given that the lack of diversity in big data and the absence of automated-decision regulations have not been remedied by the United States federal government. In contrast, European data regulations state that machine-automation developers must use appropriate mathematical and statistical techniques to ensure both that the risk of error is minimized and that discriminatory effects are prevented. Regardless of geography, it is undeniable that managing these prejudices requires careful attention to data, the use of artificial intelligence to help detect bias, and the building of diverse teams. The federal government must ensure that those behind the automation are ethically and legally obliged to ensure AI is on the side of fairness.
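
Detecting this kind of bias can start with something as simple as comparing a model's decision rates across patient groups. The sketch below uses made-up data to compute a demographic-parity gap; a large gap is a prompt to audit the training data and features, not a verdict on its own.

```python
# Sketch with made-up data: measure a demographic-parity gap, i.e. how often a
# model refers patients of each group to specialty care.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)          # e.g. demographic group
referred = np.where(group == "A",                  # model's referral decision
                    rng.random(1000) < 0.30,
                    rng.random(1000) < 0.18)

rate_a = referred[group == "A"].mean()
rate_b = referred[group == "B"].mean()
print(f"referral rate A={rate_a:.2%}, B={rate_b:.2%}, gap={abs(rate_a - rate_b):.2%}")
```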


Ireland’s data watchdog slammed for letting adtech carry on ‘biggest breach of all time’

Responding to the ICCL submission, the DPC’s deputy commissioner Graham Doyle sent this statement: “Extensive recent updates and correspondence on this matter, including a meeting, have been provided by the DPC. The investigation has progressed and a full update on the next steps provided to the concerned party.” However, in a follow-up to Doyle’s remarks, Ryan told TechCrunch he has “no idea” what the DPC is referring to when it mentions a “full update”. On “next steps” he said the regulator informed him it will produce a document setting out what it believes the issues are — within four weeks of its letter, dated September 15. Ryan expressed particular concern that the DPC’s enquiry does not appear to cover security — which is the crux of the RTB complaints, since GDPR’s security principle puts an obligation on processors to ensure data is handled securely and protected against unauthorized processing or loss. ... The regulator also pointed to another related open enquiry — into adtech veteran Quantcast, also beginning in May 2019. The DPC has said the Quantcast enquiry is examining the lawful basis claimed for processing Internet users’ data for ad targeting purposes, as well as considering whether transparency and data retention obligations are being fulfilled.


Write Better Automated Tests With Cypress in Angular

Cypress is much like Protractor for Angular applications, but Cypress is much faster to run and easier to debug. Cypress is not only good at automated end-to-end testing that is independent of our applications, but is also capable of unit testing the methods in model classes, service classes, and so on. Cypress provides a complete end-to-end testing experience in your browser where you can watch your tests run in an automated way. Cypress provides its own test runner where you can run the tests locally. Cypress also provides some other useful features like Time Travel, Debuggability, Real-time reloads and Automatic waiting. These features set this tool apart, and we will see them in action. Enough talk. If you've already cloned this repo then you're good to go, but if not then you probably have an Angular project and you just have to add Cypress as a dependency. ... The cypress package includes a desktop app and the Cypress binary. It may take a few minutes to run this command for the first time because the package installation needs to download the Cypress binary. The Cypress binary is saved in a global cache directory, so installing this package in the future will be much faster for the same version.


Connecting the dots in an uncertain world

A tendency to underestimate the unexpected — both the chance of it happening and the potential in it — is a common one. To combat this bias, instead of doing “risk management,” which tends to focus on known unknowns, you can build a muscle for planning for and finding potential in the unexpected. This will be what counts during periods of crisis and uncertainty. One way of doing this is to develop a North Star you can use to connect unexpected events to your mission. Take Kaan Terzioglu, CEO of Turkish telecom company Turkcell from 2015 to 2019. When there was an outbreak of serious violence in Turkey in 2016 following an attempted coup d’etat, Kaan and his team responded by making the Internet free for a month so people were able to communicate with those they cared about without worrying about bills. And during a regional refugee crisis, the company’s quick and proactive actions to develop an app to help Syrian refugees find crucial health and financial information in the country calmed tensions in the camps. As Turkcell has built on this initial app, adding more services, customer loyalty among the refugee population has given Turkcell a major market share among the 3 million Syrian refugees in Turkey. 


What advances in HPC and AI mean for business, and what they do not

What businesses do need to be mindful of when looking to deploy AI is that it is not a magic bullet for answering business-critical questions. What’s more, as we’ve seen in several recent unfortunate examples, if your AI is not trained with a wide set of data, it can end up amplifying a wrong supposition, rendering the end product useless. For example, if you’re only training your facial recognition programme with pictures of white men, you’re going to get some biased and potentially discriminatory outcomes. As with all forms of modelling, the findings you get out are only as good as the data that you put into it. There are no shortcuts around this simple fact, and any organisation trying to find one will quickly become unstuck. The key take-away from these missteps is not that AI has failed or is incapable of carrying out a task; it’s that humans have, can and often do apply the technology poorly. Today, AI is transforming almost every industry and vertical, from pathology, where it’s used to detect cancer, to AI picking up the phone to answer contact centre customer queries, and from water-controlled indoor farming to driving autonomous vehicles in warehouses and on public roads.


Interview with Event Modeling Founder - Adam Dymitruk

The event model contains independent units of work (slices). This removes the need for stand-ups, sprints and many other processes that are not effective with remote work. The hours saved are reinvested in implementing the solution. With the guidance event modeling brings, the transition to remote work is barely felt. Because the patterns are prescribed, event modeling takes the guesswork out of many aspects of writing software. The design allows a fixed and flat cost curve, where the complexity of the system doesn't make a project end in a "death march". Event Modeling is simple. Whether you're looking at BPMN or UML, the traditional practices for design are incredibly heavy. The main benefit is that you can get the same value for a much smaller investment of time to create a blueprint for how your current or potential solution works or will work. The goal is to sweep aside the mountain of books and get an organization going quickly. It takes 15 minutes to explain Event Modeling. The other main benefit is having a reliable plan to start implementation. Most new software projects today head into development blind. An important test that the event model provides is that it shows whether you are "information complete".


The 4 Top Artificial Intelligence Trends For 2021

Advancements in medical technology and standards of care, together with advances in communication technology, have enabled outbreaks to be spotted more quickly and lockdowns to be imposed. Over the next year, AI will be added to the list of technological developments that are enabling us to more effectively deal with pandemics. The growth in the amount of scientific and medical literature alone is enormous, with more than 28,000 papers published by April this year relating to Covid-19. A dedicated search engine powered by natural language processing (NLP) algorithms has already been made available, so anyone can get AI assistance when investigating this massive dataset. Work is also ongoing to develop AI solutions to help deal with the huge backlog of other medical issues, such as cancer, where treatment has been affected as resources are diverted to fight Covid-19. Over the next year, we are likely to see the accelerated adoption of AI across many other areas of healthcare, not only related to tackling viruses. By developing our ability to apply machine learning problem-solving to these massive, real-time global datasets, we will spot outbreaks more easily, track contact between infected people, enable more accurate diagnoses, and, by predicting ways that a virus might evolve in the future, develop more effective and lasting vaccinations.
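
At its simplest, the kind of literature search described here ranks papers by the similarity between a query and each abstract. The sketch below is a minimal TF-IDF retrieval example over a toy corpus, not the actual Covid-19 search tooling.

```python
# Minimal retrieval sketch over a toy corpus (not the real Covid-19 search tools):
# rank documents by TF-IDF cosine similarity to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Transmission dynamics of the novel coronavirus in enclosed spaces",
    "Deep learning for chest CT classification of viral pneumonia",
    "Vaccine candidates targeting the spike protein",
]
query = ["spike protein vaccine development"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.3f}  {text}")
```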


New US digital identity legislation promises more secure verification

The Improving Digital Identity Act would create an Improving Digital Identity Task Force within the executive office of the president. Its mission is to establish a government-wide effort to develop secure methods for federal, state and local government agencies to validate identity attributes and support interoperable digital identity verification in both the public and private sectors. The task force would comprise cabinet secretaries, heads of other federal agencies, state and local government officials, members designated by congressional committees, and a position appointed by the president. Additionally, the National Institute of Standards and Technology (NIST) would develop a standards framework for digital identity verification to guide federal, state and local governments in selecting their digital identity solutions. NIST would have one year to publish a final version of the framework. The legislation requires the task force to publish a report with recommendations on research and development in systems that enable digital identity verification. Upon its completion, and with the consent of the individual, the framework will enable government agencies to securely vouch for their citizens in real time when online.



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - September 20, 2020

Why disaster recovery preparation is even more important during a pandemic

From a cyber perspective, disaster recovery during a pandemic raises new challenges as well. The rapid expansion of remote work introduces new vulnerabilities. Many organizations have relaxed perimeter security controls to allow remote connectivity, introducing new threat vectors that threat actors can exploit to gain access to networks. Lately, many of these attacks have focused on ransomware and data destruction, which encrypt data and often corrupt critical backup systems, rendering existing disaster recovery plans unusable. An "all hands on deck" approach to manual recovery is often the only response to these conditions. Unfortunately, social distancing protocols and remote work arrangements can make those manual recovery efforts an impossibility. ... IT disaster recovery generally falls into one of two categories: a natural disaster event (earthquake, flood, etc.) or a system failure (such as a failure in hardware, software or electrical supply). This year, the DR responses we have witnessed have included local or regional power outages and other power infrastructure issues. We have seen this across multiple industries, including financial services, with outages during peak customer windows and prolonged recovery times.


Iranian hacker group developed Android malware to steal 2FA SMS codes

In a report published today, Check Point researchers said they also discovered a potent Android backdoor developed by the group. The backdoor could steal the victim's contacts list and SMS messages, silently record the victim via the microphone, and show phishing pages. But the backdoor also contained routines that were specifically focused on stealing 2FA codes. Check Point said the malware would intercept and forward to the attackers any SMS message that contained the "G-" string, usually employed to prefix 2FA codes for Google accounts sent to users via SMS. The thinking is that Rampant Kitten operators would use the Android trojan to show a Google phishing page, capture the user's account credentials, and then access the victim's account. If the victim had 2FA enabled, the malware's 2FA SMS-intercepting functionality would silently send copies of the 2FA SMS code to the attackers, allowing them to bypass 2FA. But that was not all. Check Point also found evidence that the malware would automatically forward all incoming SMS messages from Telegram and other social network apps. These types of messages also contain 2FA codes, and it's very likely that the group was using this functionality to bypass 2FA on more than just Google accounts.


Clean Coders: Why India isn’t on the List

A more vexing element drives the problem: a majority of Indian software companies look at software purely as a business. It's mostly about getting the deliverables ready in the quoted time and almost never about striving for quality results. Consequently, the team treats coding as a task to be ticked off against numbers rather than a task requiring quality — something that would actually educate folks to avoid future mistakes. It's a chain reaction, really. When the organization itself does not prioritize clean, quality coding while a product is being developed, most coders lose the urge to be curious about better practices and approaches, since they have to direct all their efforts into meeting deadlines. Even to this day, many skilled professionals in the industry lack the ability to convey their ideas and pain points effectively during client meetings or within the team. Organizations need to establish the fact that coding is only one aspect of the job and that communication is equally important. Especially in the service sector, when we are constantly collaborating on large-scale projects, it's absolutely crucial for clients and internal teams to be on the same page.


6 big data blunders businesses should avoid

Owing to modern technologies, all businesses, irrespective of size, have access to rich, granular data about their operations and clients. The major hurdle is dealing with a massive quantity of data that is both challenging to maintain and costly to manage. Even with appropriate tools, dealing with such data is a cumbersome activity, and errors are frequent given the layers of complexity involved in dealing with Big Data. However, Big Data holds diverse leverages for businesses. ... Thus, Big Data becomes the defining leverage for innovative enterprises to gain an edge over their competitors. The usage of these data is sure to exceed 274.3 billion by 2022 globally, with each individual generating approximately 1.7 megabytes of information per second. With so much at stake, can you really afford to make mistakes with Big Data? So, here are some big data blunders that businesses need to avoid in order to harness its full capabilities and enjoy the leverages that it brings.


Tracking global cybercrime activity and the impact on the digital economy

The EMEA region saw lower overall attack rates in comparison to most other global regions from January through June 2020. This is due to a high volume of trusted login transactions across relatively mature mobile apps. The attack patterns in EMEA were also more benign and had less volatility and fewer spikes in attack rates. However, there are some notable exceptions. Desktop transactions conducted from EMEA had a higher attack rate than the global average and automated bot attack volume grew 45% year over year. The UK originates the highest volume of human-initiated cyberattacks in EMEA, with Germany and France second and third in the region. The UK is also the second largest contributor to global bot attacks behind the U.S. One example of a UK banking fraud network saw more than $17 million exposed to fraud across 10 financial services organizations. This network alone consisted of 7,800 devices, 5,200 email addresses and 1,000 telephone numbers. The overall human-initiated attack rate fell through the first half of 2020, showing a 33% decline year over year. The breakdown by sector shows a 23% decline in financial services and a 55% decline in e-commerce attack rates.


Load Testing APIs and Websites with Gatling: It’s Never Too Late to Get Started

If it is your first time load testing, whether you already know the target user behavior or not, you should start with a capacity test. Stress testing is useful but analyzing the metrics is really tricky under such a load. Since everything is failing at the same time, it makes the task difficult, even impossible. Capacity testing offers the luxury to go slowly to failure, which is more comfortable for the first analysis. To get started, just run a capacity test that makes your application crash as soon as possible. You only need to add complexity to the scenario when everything seems to run smoothly. ... If an average can give you a quick overview of what happened in a run, it will hide under the rug all the things you actually want to look at. This is where percentiles come in handy. Think of it this way: if the average response time is some amount of milliseconds, how does the experience feel in the worst case for 1% of your user base? Better or worse? How does it feel for 0.1% of your users? And so on, getting closer and closer to zero. The higher the amount of users and requests, the closer you’ll need to get to zero in order to study extreme behaviors. 
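
The point about averages hiding what matters is easy to demonstrate with synthetic response times: the mean and median can look acceptable while the 99th and 99.9th percentiles reveal a slow path that real users will hit.

```python
# Synthetic response times: the mean looks fine while the tail does not,
# which is why load-test reports should be read at p95/p99/p99.9.
import numpy as np

rng = np.random.default_rng(42)
fast = rng.normal(120, 20, 98_000)        # most requests ~120 ms
slow = rng.normal(2500, 400, 2_000)       # 2% of requests hit a slow path
latencies_ms = np.clip(np.concatenate([fast, slow]), 1, None)

print(f"mean   : {latencies_ms.mean():7.1f} ms")
for p in (50, 95, 99, 99.9):
    print(f"p{p:<5}: {np.percentile(latencies_ms, p):7.1f} ms")
```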


Who Should Own Data Governance?

Many organizations position data governance under the Chief Financial Officer (CFO). Other organizations position data governance under the Chief Risk Officer (CRO) or the Chief Operational Officer (COO). In addition, some organizations position data governance under the Chief Privacy Officer (CPO) or the Chief Information Security Officer (CISO). These days there are so many C-levels. Placement of data governance under any one of these C-level people is never wrong. Data governance must reside somewhere and having a C-level person as your Executive Sponsor is always a good thing. In fact, many organizations state that senior leadership’s support, sponsorship and understanding of data governance is the number one best practice for starting and sustaining their program. Having a C-level person as your Executive Sponsor often dictates where data governance will reside in the organization. Is it better for data governance to be placed in Finance and report through the CFO than it is to have it reside in Operations and report through the COO? The answer to that question is, “It depends.” It depends on the interest and ability of that person and that part of the organization to provide for the proper level of capacity in terms of resources to operationalize and engage the organization.


Why Are Some Cybersecurity Professionals Not Finding Jobs?

Simply stated, these good people cannot get hired in a cyber job. Going much further, they argue that select organizations (who discuss millions of unfilled jobs) are pushing their own training agendas or certifications, want to boost certain company stock prices, or have other reasons to encourage this “abundance of cyber jobs remain vacant” narrative, even though it is not true, in their opinion. I want to be clear up front that I disagree with this narrative. I do believe that many (perhaps millions, but we can argue the numbers in another blog) global cybersecurity job vacancies do exist. Nevertheless, I truly sympathize with the people who disagree, and I want to try and help as many as I can find employment. I also want hiring managers to set proper expectations. In addition to my blogs and articles, I have personally mentored and helped dozens of people find cyber jobs, from high school students to new college graduates to CISOs and CSOs. (Note: this is not my "day job" but one way I try to give back to the security community – just like so many others are doing as well.) I also write about the ways government CISOs struggle in this area, and how tech leaders can find more cyber talent.


Seven Steps to Realizing the Value of Modern Apps

With organizations running a multitude of environments to meet the demands of their applications, each with unique technological requirements, finding the platform isn't the only challenge. What's hard is that development and management are more complex than ever before, with IT and developers navigating traditional apps, cloud-native, SaaS, services and on-prem, for example. Here's where you need common ground between IT teams, lines of business and developers – where having a single digital platform is critical – to remove the potential for silos springing up, enable the better deployment of resources, and provide a consistent approach to managing applications, infrastructure and business needs together. It's about creating one common platform to ‘run all the things’. One software-defined digital foundation that provides the platform – and the choice of where to run IT – to drive business value, create the best environment for developers and help IT effectively manage existing and new technology via any cloud, for any application, on any device, with intrinsic security. One platform that can deliver all apps, enabling developers to use the latest development methodologies and container technologies for faster time to production.


4 measures to counteract risk in financial services

Financial services regulators across jurisdictions have identified concentration risk as a factor to consider in assessing risk in outsourcing. That risk has two components: (i) micro-risk, where reliance on a single provider for core operations may present an undue operational risk if there is a single point of failure, and (ii) macro-risk, where financial firms within the ecosystem are so reliant on a single vendor that a point of failure risks causing a broad systemic risk to the operations of the financial services sector. Notably, this risk is not unique to cloud services and, as the Bank of England commented in its Consultation Paper on Outsourcing and Third Party Risk Management, “a small number of third parties have traditionally dominated the provision of certain functions, products, or services to firms, such as cash machines or IT mainframes.” In other words, the issue of concentration risk is not net new but has been a feature of the financial services industry for decades. While cloud remains relatively nascent compared to entrenched providers of legacy systems, like the mainframe, its increasing adoption means that financial institutions must account for, and mitigate against, micro-risk issues of concentration in their use of cloud providers.



Quote for the day:

“When we are no longer able to change a situation, we are challenged to change ourselves.” -- Viktor E. Frankl