Daily Tech Digest - August 31, 2020

How the DevOps model will build the new remote workforce

Most importantly, the humans managing systems ultimately determined the company's capacity to adapt to the pandemic. "We recognized that … the systems may need to scale, and we may need to make changes to meet a new global demand, [but] that is much different than how our peers, the people we care about and work with, are going to be impacted by this," Heckman said. Thus, the SRE team's role was not just to watch systems and shore up their reliability, but also to manage communications with other employees, Heckman said, "not only so they had the current context of what we were thinking, suggesting and where we were headed, but also to give them some confidence that the system around them would be fine." Similar principles must be applied to manage the human impact of a longer-term shift to remote work, said Jaime Woo, co-founder of Incident Labs, an SRE training and consulting firm, in a separate presentation at the SRE from Home event. "The answer is not 'just be stronger,'" for humans during such a transition, any more than it is for individual components in a distributed computing system under unusual traffic load, Woo said.

From Defense to Offense: Giving CISOs Their Due

CISOs are now in a position where they must — somehow — reinvent how they work and how they are perceived within their organizations. Historically, they have been the company's risk-averse first line of defense against cyberattacks, and have been viewed as such. But this state of affairs needs to evolve. "CISOs cannot afford to be seen as blockers of innovation; they must be problem-solvers," says Kris Lovejoy, EY Global Advisory Cybersecurity Leader, in EY's report. "The way we've organized cybersecurity is as a backward-looking function, when it is capable of being a forward-looking, value-added function. When cybersecurity speaks the language of business, it takes that critical first step of both hearing and being understood. It starts to demonstrate value because it can directly tie business drivers to what cybersecurity is doing to enable them, justifying its spend and effectiveness." But do current CISOs have the right skills and experience to work in this new way and serve in a more proactive and forward-thinking role? That's an open question, and the answer will probably demand a new breed of CISO whose job is not driven mainly by threat abatement and compliance.

Want an IT job? Look outside the tech industry

It's always been true that most software was written for use, not sale. Companies might buy their ERP software from SAP and office productivity software from Microsoft, but they were writing all sorts of software to manage their supply chains, take care of employees, and more. What wasn't true then, but is definitely true now, is just how much of that software spend is focused on company-defining initiatives rather than back-office software meant to keep the lights on. Small wonder, then, that in the past year companies have posted nearly one million jobs in the US, according to data from Burning Glass, which scours job postings. That number is expected to increase by more than 30% over the next few years, with non-tech IT jobs set to boom at a 50% faster clip than IT jobs within tech. As for who is hiring, though tech companies top the list (arguably one of them isn't really a tech company), the rest of the top 10 are decidedly non-tech. Digging into the Burning Glass report, and moving beyond software developer jobs specifically into the broader category of IT generally, Professional Services, Manufacturing, and Financial Services account for roughly half of all IT openings outside tech.

Data protection critical to keeping customers coming back for more

Despite the growing advancements on the data protection front, 51 percent of consumers surveyed said they are still not comfortable sharing their personal information. One-third of respondents said they are most concerned about it being stolen in a breach, with another 26 percent worried about it being shared with a third party. In the midst of the growing pandemic, COVID-19 tracking, tracing, containment and research depend on citizens opting in to share their personal data. However, the research shows that consumers are not interested in sharing their information. When specifically asked about sharing healthcare data, only 27 percent would share health data for healthcare advancements and research. Another 21 percent of consumers surveyed would share health data for contact tracing purposes. As data becomes more valuable to combat the pandemic, companies must provide consumers with more background and reasoning as to why they’re collecting data – and how they plan to protect it. ... As the debate grows louder across the nation, 73 percent of consumers think that there should be more government oversight at the federal and/or state/local levels.

The power of open source during a pandemic

The world needs to shift the way it's approaching problems and continue locating solutions the open source way. Individually, this might mean becoming connection-oriented problem-solvers. We need people able to think communally, communicate asynchronously, and consider innovation iteratively. We're seeing that organizations will need to consider technologists less as tradespeople who build systems and more as experts in global collaboration, people who can make future-proof decisions on everything from data structures to personnel processes. Now is the time to start building new paradigms for global collaboration and find unifying solutions to our shared problems, and one key element to doing this successfully is our ability to work together across sectors. A global pandemic needs the public sector, the private sector, and the nonprofit world to collaborate, each bringing its own expertise to a common, shared platform. ... The private sector plays a key role in building this method of collaboration by building a robust social innovation strategy that aligns with the collective problems affecting us all. This pandemic is a great example of collective global issues affecting every business around the world, and this is the reason why shared platforms and effective collaboration will be key moving forward.

Why Digital Transformation Always Needs To Start With Customers First

A fascinating point in Deloitte Insights' research is the correlation it uncovered between an organization's digital transformation maturity and the benefits it gains in efficiency, revenue growth, product/service quality, customer satisfaction and employee engagement. The researchers found a hierarchy of pivots successful enterprises make to keep pursuing more agile, adaptive organizational structures combined with business model adaptability, all driven by customer-driven innovation. The most digitally mature organizations can adopt new frameworks that prioritize market responsiveness and customer-centricity, and have an analytics- and data-driven culture with actionable insights embedded in their DNA. The two highest-payoff areas for accelerating digital maturity and achieving its many benefits are mastering data and creating more intelligent workflows. Deloitte Insights' research team looked at the seven most effective digital pivots enterprises can make to become more digitally mature. The pivots that paid off best, as measured by revenue, margin, customer satisfaction, product/service quality and employee engagement, combined data mastery with more intelligent workflows.

A searingly honest tech CEO tells the truth about working from home

Morris believes that everyone has what she calls their "Covid Acceptance Curve." But no two employees' curves are likely to be alike. "Many possible solutions for one employee are actually counter-indicated for others," she says. "Think of your team as an overlapping series of waves, each strand representing a person and their curve. You could try to slot in a single solution across 'strands,' but it will inevitably miss so many marks, reaching people too late, too early or with something that isn't even relevant to them." Some might imagine that one of the particular failings of tech leadership is the temptation to treat all employees to one broad free lunch. There, that should please everyone. Now, says Morris of her employees: "Some are experimenting with how to juggle work and homeschooling, some are struggling with crippling isolation, some have been impacted by Covid personally, others are facing anxiety of so many kinds." I wonder whether it was always this way, but leaders didn't care so much. Each employee has always been burdened with their own practical and emotional issues not directly related to work. Now, though, it's the physical distance and the constant, lonely staring at screens that intensify difficulties -- and strain leadership's ability to anticipate or even understand them.

How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC

Cloud gives enterprises a "get-out-of-the-burden-of-maintaining-open-source free" card, but savvy engineering teams still want open source so as to "not lock themselves in and to not create a bunch of technical debt." How does open source help to alleviate lock-in? Engineering teams can build "a very modular system so that they can swap in and out components as technology improves," something that is "very hard to do with the turnkey cloud service." That's the technical side of open source, but there's more to it than that, Gupta noted. Referring to how Elastic ate away at Splunk's installed base, Gupta said, "The biggest reason...is there is a deep amount of developer love and appreciation and almost like an addiction to the [open source] product." This developer love is deeper than just liking to use a given technology: "You develop [it] by being able to feel it and understand the open source technology and be part of a community." Is it impossible to achieve this community love with a proprietary product? No, but "It's a lot easier to build if you're open source." He went on, "When you're a black box cloud service and you have an API, that's great. People like Twilio, but do they love it?"

Is Low-Code Or No-Code Development Suitable For Your Startup App Idea?

Speed and adaptability are key ingredients in every product development phase of a startup. Assume it will take you four months to create and launch the first version of your product. You spoke with potential customers, then gathered and implemented their feedback to create the best solution you could build based on the information you have. If those potential customers need your solution, they will be looking forward to it. And if they committed financially, they're going to be even more eager to use it. The truth is that in a competitive market where buyers have many options, eagerness and patience are two different things. Customers may wish to use your product sooner rather than later, but they will not wait indefinitely. Even if they don't have better options today, they will figure out an alternative solution. Now assume you launched your product, served the first customers and gathered some more critical feedback. Your customers will not wait months for those changes, no matter how important your product is to them. Speed and adaptability can make or break a startup.

Tackle.io's Experience With Monitoring Tools That Support Serverless

Tackle runs microservices such as managed containers on AWS Fargate, deploys its front end on Amazon CloudFront, and uses Amazon DynamoDB for its database, Woods says. “We’ve spent a lot of time making sure that our architecture is something scalable and allows us to provide value to our customers without interruption,” he says. Tackle’s clientele includes software and SaaS companies such as GitHub, PagerDuty, New Relic, and HashiCorp. Despite the benefits, Woods says running serverless can introduce such issues as trying to find obscure failures with APIs. “Once you adopt serverless, you’ll have a chain of Lambda functions calling each other,” he says. “You know that somewhere in that process was an error. Tracing it is really difficult with the tools provided out of the box.” Before adopting Sentry, Tackle spent a lot of engineering hours trying to discover the root cause of problems, Woods says, such as why a notification was not sent to a customer. “It might take half a day to get an answer on that.” Tackle adopted Sentry’s technology initially to get back traces on such errors. Woods says his company soon discovered Sentry also sends alerts for failures Tackle was not aware of in its web app.

Quote for the day:

"You can't lead anyone else further than you have gone yourself." -- Gene Mauch

Daily Tech Digest - August 30, 2020

'Lemon Duck' Cryptominer Aims for Linux Systems

The malware uses the infected computer to replicate itself in a network and then uses the contacts from the victim's Microsoft Outlook account to send additional spam emails to more potential victims, the report notes. "People are more likely to trust messages from people they know than from random internet accounts," Rajesh Nataraj, a researcher with Sophos Labs, notes. The malware contains code that generates email messages with dynamically added malicious files and subject lines pulled up from its database with phrases such as: "The Truth of COVID-19," "COVID-19 nCov Special info WHO" or "HEALTH ADVISORY: CORONA VIRUS," according to the report. Researchers found that Lemon Duck malware exploits the SMBGhost vulnerability found in versions 1902 and 1909 of the Windows 10 operating system. Exploiting this vulnerability allows for remote code execution. Microsoft fixed this bug in March, but unpatched systems remain at risk. The code used in Lemon Duck also leverages the EternalBlue vulnerability in Windows to help the malware spread laterally through enterprise networks.

Can AI Reimagine City Configuration and Automate Urban Planning?

While the concept of AI-enabled automated urban planning is appealing, the researchers quickly encountered three challenges: how to quantify a land-use configuration plan, how to develop a machine learning framework that can learn the good and the bad of existing urban communities in terms of land-use configuration policies, and how to evaluate the quality of the system’s generated land-use configurations. The researchers began by formulating the automated urban planning problem as a learning task on the configuration of land-use given surrounding spatial contexts. They defined land-use configuration as a longitude-latitude-channel tensor with the goal of developing a framework that could automatically generate such tensors for unplanned areas. The team developed an adversarial learning framework called LUCGAN to generate effective land-use configurations by drawing on urban geography, human mobility, and socioeconomic data. LUCGAN is designed to first learn representations of the contexts of a virgin area and then generate an ideal land-use configuration solution for the area.
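The tensor representation described above can be made concrete with a short sketch. This is a hedged illustration, not the researchers' code: the grid resolution (32x32) and the number of land-use channels (10) are assumptions chosen for the example; the idea is simply that each channel counts points of interest of one category per grid cell.

```python
import numpy as np

# Illustrative sketch of a longitude-latitude-channel land-use tensor.
# Grid size and category count are assumptions, not values from the paper.
N_LON, N_LAT, N_CATEGORIES = 32, 32, 10

def empty_configuration():
    """Tensor for an unplanned ('virgin') area: no POIs anywhere yet."""
    return np.zeros((N_LON, N_LAT, N_CATEGORIES))

def add_poi(config, lon_idx, lat_idx, category):
    """Place one point of interest of a given category in a grid cell."""
    config[lon_idx, lat_idx, category] += 1
    return config

config = empty_configuration()
config = add_poi(config, 3, 7, category=2)  # e.g. one 'residential' POI

print(config.shape)     # (32, 32, 10)
print(config[3, 7, 2])  # 1.0
```

In these terms, LUCGAN's generator would output a tensor of this shape for an unplanned area, conditioned on representations of the surrounding spatial context.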

AT&T Waxes 5G Edge for Enterprise With IBM

As enterprises increasingly shift to a hybrid-cloud model, IBM is working with AT&T and other operators to allow businesses to deploy applications or workloads wherever they see fit, Canepa said. “That includes now what we’re highlighting here, the mobile edge environment that comes with this, the emerging 5G world.” Because enterprises are no longer restricted to a single cloud architecture on premises, they’re gaining access to a larger pool of potential innovation sources, he explained. This extends to mobile network operators’ infrastructure as well. “Up until this point, the networks inside the telcos were very kind of structured environments, hardwired, specialized equipment that was really good at what it did, but did a fairly limited set of things,” Canepa said. “What we’re evolving to now is truly a hybrid-cloud environment where that network itself becomes a platform. And then the ability to extend that platform to the edge creates a whole new opportunity to create new insights as a service, new applications, and solutions that can be deployed in that environment.”

Databricks Delta Lake — Database on top of a Data Lake

The most challenging issue was the lack of database-like transactions in Big Data frameworks. To cover for this missing functionality, we had to develop several routines that performed the necessary checks and measures. However, the process was cumbersome, time-consuming and frankly error-prone. Another issue that used to keep me awake at night was the dreaded Change Data Capture (CDC). Databases have a convenient way of updating records and showing the latest state of the record to the user. On the other hand, in Big Data we ingest data and store it as files. Therefore, the daily delta ingestion may contain a combination of newly inserted, updated or deleted data. This means we end up storing the same row multiple times in the Data Lake. ... Developed by Databricks, Delta Lake brings ACID transaction support to your data lakes for both batch and streaming operations. Delta Lake is an open-source storage layer for big data workloads over HDFS, AWS S3, Azure Data Lake Storage or Google Cloud Storage. Delta Lake packs in a lot of cool features useful for Data Engineers.
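The duplicate-row problem described above can be made concrete with a small, pure-Python sketch of what a CDC "replay" has to do: collapse a pile of inserts, updates and deletes into the latest state per key. The field names (`id`, `op`, `seq`) are illustrative assumptions, not Delta Lake's schema.

```python
# Pure-Python sketch (illustrative only) of CDC reconciliation: daily
# delta files accumulate inserts, updates and deletes for the same key,
# and the reader wants only each key's final state.

def latest_state(cdc_records):
    """Replay CDC records in sequence order and keep each key's final state."""
    state = {}
    for rec in sorted(cdc_records, key=lambda r: r["seq"]):
        if rec["op"] == "delete":
            state.pop(rec["id"], None)
        else:  # insert and update both upsert the row
            state[rec["id"]] = rec["value"]
    return state

records = [
    {"id": 1, "op": "insert", "value": "a",  "seq": 1},
    {"id": 2, "op": "insert", "value": "b",  "seq": 2},
    {"id": 1, "op": "update", "value": "a2", "seq": 3},  # same row, stored again
    {"id": 2, "op": "delete", "value": None, "seq": 4},
]
print(latest_state(records))  # {1: 'a2'}
```

Delta Lake expresses this reconciliation declaratively: a MERGE (in the Python API, `DeltaTable.merge(...)` with `whenMatchedUpdateAll()` / `whenNotMatchedInsertAll()` clauses) upserts the daily delta into the table, and the transaction log provides the ACID guarantees mentioned above.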

Developing a scaling strategy for IoT

“One of the most often overlooked or under budgeted issues of IoT scaling is not the initial build out of the system which is typically well planned for, but the long-term maintenance and support of what can quickly become a huge network of devices that are often deployed in difficult to reach locations,” he said. “That complexity requires a resilient network to ensure that all of these IoT devices, connected via an aggregation point, can be securely managed and updated to extend their lifespan. Where edge compute is necessary due to the density of connected IoT devices, it is also advisable to provide scalable, secure and highly reliable remote management for all the IoT network infrastructure that provides a fast and predictable way to recover from failures. “An independent management network should provide a secure alternate access path, including the ability to quickly re-deploy any software and or configs automatically onto connected equipment if they need to be re-built, ideally without having to send an engineer to site. In general networking terms, it is very important to ensure that the IoT gateways and edge compute equipment stack is actively monitored and that it is designed with resiliency in mind.”

Creating The Vision For Data Governance

The first step in every successful data governance effort is the establishment of a common vision and mission for data and its governance across the enterprise. The vision articulates the state the organization wishes to achieve with data, and how data governance will foster reaching that state. Through the skills of a specialist in data governance and using the techniques of facilitation, the senior business team develops the enterprise’s vision for data and its governance. All of the subsequent activities of any data governance effort should be informed by this vision. Visioning offers the widest possible participation for developing a long-range plan, especially in enterprise-oriented areas such as data governance. It is democratic in its search for disparate opinions from all stakeholders and directly involves a cross-section of constituents from the enterprise. Developing a vision helps avoid piecemeal and reactionary approaches to addressing problems. It accounts for the relationships between issues, and how one problem’s solution may generate other problems or have an impact on another area of the enterprise. Developing a vision at the enterprise level allows the organization to create a holistic approach to setting goals that will enable it to realize the vision.

Google Announces a New, More Services-Based Architecture Called Runner V2 to Dataflow

Runner V2 has a more efficient and portable worker architecture rewritten in C++, which is based on Apache Beam's new portability framework. Moreover, Google packaged this framework together with Dataflow Shuffle for batch jobs and Streaming Engine for streaming jobs, allowing them to provide a standard feature set from now on across all language-specific SDKs, as well as share bug fixes and performance improvements. The critical components in the architecture are the worker Virtual Machines (VMs), which run the entire pipeline and have access to the various SDKs. ... If features or transforms are missing for a given language, they must be duplicated across the various SDKs to ensure parity; otherwise, there will be gaps in feature coverage, and newer SDKs like the Apache Beam Go SDK will support fewer features and exhibit inferior performance characteristics for some scenarios. Currently, Dataflow Runner v2 is available with Python streaming pipelines, and Google recommends that developers test the new Runner with current non-production workloads before enabling it by default on all new pipelines.

DOJ Seeks to Recover Stolen Cryptocurrency

The cryptocurrency stolen from the two exchanges was later traded for other types of virtual currency, such as bitcoin and tether, to launder the funds and obscure their transaction path, the Justice Department says. The civil lawsuit relates to a criminal case that the Justice Department brought against two Chinese nationals for their alleged role in laundering $100 million in cryptocurrency stolen from exchanges by North Korean hackers in 2018. The two suspects, Tian Yinyin and Li Jiadong, are each charged with money laundering conspiracy and operating an unlicensed money transmitting business. The two also face sanctions from the U.S. Treasury Department. U.S. law enforcement officials and intelligence agencies, including the Cybersecurity and Infrastructure Security Agency, believe these types of crypto heists are carried out by the Lazarus Group, a hacking collective also known as Hidden Cobra. Earlier this week, CISA, the FBI and the U.S. Cyber Command warned of an uptick in bank heists and cryptocurrency thefts since February by a subgroup of the Lazarus Group called BeagleBoyz.

The increasing importance of data management

The goal of data management is to facilitate a holistic view of data and enable users to access and derive optimal value from it—both data in motion and data at rest. Along with other data management solutions, DataOps leads to measurably better business outcomes: boosted customer loyalty, revenue, profit, and other benefits. The trouble with achieving these goals lies in part in businesses not understanding how to translate the information they hold into actionable outcomes. Once a business has combed through all the information it holds to unearth valuable insights, it can then enact changes or implement efficiencies to yield returns. ... Data security is consistently rated among the highest concerns and priorities of IT management and business leaders alike. But we can’t say that technology is always the answer in ensuring that data is securely and safely stored. A key challenge is getting alignment across organizations on the classification of data by risk and on how data should be stored and protected. That makes security a human issue; the tech is often easy. Two-thirds of survey respondents report insufficient data security, making data security an essential element of any discussion of efficient data management.

What Companies are Disclosing About Cybersecurity Risk and Oversight

More boards are assigning cybersecurity oversight responsibilities to a committee. Eighty-seven percent of companies this year have charged at least one board-level committee with cybersecurity oversight, up from 82% last year and 74% in 2018. Audit committees remain the primary choice for those responsibilities. This year 67% of boards assigned cybersecurity oversight to the audit committee, up from 62% in 2019 and 59% in 2018. Last year we observed a significant increase in boards assigning cybersecurity oversight to non-audit committees, most often risk or technology committees, (28% in 2019 up from 20% in 2018), but that percentage dropped this year (26% in 2020). A minority of boards, 7% overall, assigned cyber responsibilities to both the audit and a non-audit committee. Among the boards assigning cybersecurity oversight responsibilities to the audit committee, nearly two-thirds (65%) formalize those responsibilities in the audit committee charter. Among the boards assigning such responsibilities to non-audit committees, most (85%) include those responsibilities in the charter.

Quote for the day:

"For true success ask yourself these four questions: Why? Why not? Why not me? Why not now?" -- James Allen

Daily Tech Digest - August 29, 2020

Banks aren’t as stupid as enterprise AI and fintech entrepreneurs think

First, banks have something most technologists don’t have enough of: domain expertise. Technologists tend to discount the exchange value of domain knowledge, and that’s a mistake. Without critical discussion, deep product management alignment, and crisp, clear business usefulness, too much technology remains abstract from the material value it seeks to create. Second, banks are not reluctant to buy because they don’t value enterprise artificial intelligence and other fintech. They’re reluctant because they value it too much. They know enterprise AI gives a competitive edge, so why should they get it from the same platform everyone else is attached to, drawing from the same data lake? Competitiveness, differentiation, alpha, risk transparency and operational productivity will be defined by how highly productive, high-performance cognitive tools are deployed at scale in the incredibly near future. The combination of NLP, ML, AI and cloud will accelerate competitive ideation by an order of magnitude. The question is, how do you own the key elements of competitiveness? It’s a tough question for many enterprises to answer.

Artificial Intelligence (AI) strategy: 3 tips for crafting yours

AI can drive value only if it is applied to a well-defined business problem, and you’ll only know if you’ve hit the mark if you precisely define what success looks like. Depending on the business objective, AI will commonly target profitability, customer experience, or efficiency. Automation from AI can yield cost savings or costs that are redirected to other uses. ... Treat your data as a treasured asset. While data quality and merging disparate data sources are common challenges, one of the biggest challenges in data integration initiatives is streamlining, if not automating, the process of turning data into actionable insights. ... If you are looking to develop AI capabilities in-house, keep in mind that AI teams can benefit from having a balance of skillsets. For example, deep expertise in modeling is critical for thorough research and solution development. Data engineering skills are essential in order to execute the solution. Your AI teams also need leaders who understand the technology, at least enough to know what is and is not possible. In running an AI team, it is important to create an environment that fosters creativity but provides structure. Keep the AI team connected to business leaders in the organization to ensure that AI is being applied to high-priority, high-value use cases that are properly framed.

How special relativity can help AI predict the future

Researchers have tried various ways to help computers predict what might happen next. Existing approaches train a machine-learning model frame by frame to spot patterns in sequences of actions. Show the AI a few frames of a train pulling out of a station and then ask it to generate the next few frames in the sequence, for example. AIs can do a good job of predicting a few frames into the future, but the accuracy falls off sharply after five or 10 frames, says Athanasios Vlontzos at Imperial College London. Because the AI uses preceding frames to generate the next one in the sequence, small mistakes made early on—a few glitchy pixels, say—get compounded into larger errors as the sequence progresses. Vlontzos and his colleagues wanted to try a different approach. Instead of getting an AI to learn to predict a specific sequence of future frames by watching millions of video clips, they allowed it to generate a whole range of frames that were roughly similar to the preceding ones and then pick those that were most likely to come next. The AI can make guesses about the future without having to learn anything about the progression of time, says Vlontzos.
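The compounding-error effect Vlontzos describes is easy to see with a toy autoregressive rollout (our illustration, not the researchers' experiment): the model's per-step error is tiny, but because each prediction is fed back in as the next input, the error grows with the horizon.

```python
# Toy illustration of compounding error in frame-by-frame prediction.
# The dynamics and rates below are invented for the example.

TRUE_RATE = 1.05    # 'ground truth' dynamics: x[t+1] = 1.05 * x[t]
MODEL_RATE = 1.06   # learned model is off by about 1% per step

def rollout(rate, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * rate)  # each prediction consumes the previous one
    return xs

truth = rollout(TRUE_RATE, 1.0, 50)
pred = rollout(MODEL_RATE, 1.0, 50)

rel_err = [abs(p - t) / t for p, t in zip(pred, truth)]
print(f"relative error after  5 steps: {rel_err[5]:.3f}")
print(f"relative error after 50 steps: {rel_err[50]:.3f}")
```

A roughly 1% per-step discrepancy balloons past 50% by step 50, which mirrors why frame-by-frame AIs stay accurate for only five or ten frames; generating a range of candidate futures and ranking them, as Vlontzos and colleagues do, sidesteps this feedback loop.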

TypeScript's co-creator speaks out on TypeScript 4.0

TypeScript was one of several efforts inside and outside Microsoft in those few years to try and tackle this need -- first for large companies like Microsoft and Google, but ultimately for the broader industry that was all moving in the same direction. Other options, like Google Dart, tried to replace JavaScript, but this proved to present too large a compatibility gap with the web as it was and is. TypeScript, by being a superset of JavaScript, was compatible with the real web, and yet also provided the tooling and scalability that were needed for the large and complex web applications of the early 2010s. Today, that scale and complexity is now commonplace, and is the standard of any SaaS company or internal enterprise LOB [line of business] application. And TypeScript plays the same role today, just for a much larger segment of the market. ... TypeScript's biggest contribution has been in bringing amazing developer tools and IDE experiences to the JavaScript ecosystem. By bringing types to JavaScript, so many error-checking, IDE tooling, API documentation and other developer productivity benefits light up. It's the experience with these developer productivity benefits that has driven hundreds of thousands of developers to use TypeScript.

Enabling transformation: How can security teams shift their perception?

There are clear opportunities to deliver this transformation through the adoption of a unified security approach. By this, we mean the integration, rationalisation and centralisation of security environments into a holistic ecosystem. Adopting such an approach can help improve the operator experience and make things simpler for the teams charged with maintenance – while also providing a cure to the headaches caused by platform proliferation. Not only this, but a unified security approach is a key enabler in helping security leaders engage at the board level by delivering cost transformation. An integrated security environment will serve to streamline operations for security teams, allowing staff to focus on higher value tasks while automating repetitive processes. In business terms, this means clawing back up to 155 days’ worth of effort for the average UK security team. Clearly, cost reduction and operational efficiencies are central to demonstrating business impact, but they should be viewed as a starting point rather than a security teams’ entire value proposition.

Deep Learning Models for Multi-Output Regression

Neural network models also support multi-output regression and have the benefit of learning a continuous function that can model a more graceful relationship between changes in input and output. Multi-output regression can be supported directly by neural networks simply by specifying the number of target variables there are in the problem as the number of nodes in the output layer. For example, a task that has three output variables will require a neural network output layer with three nodes in the output layer, each with the linear (default) activation function. We can demonstrate this using the Keras deep learning library. We will define a multilayer perceptron (MLP) model for the multi-output regression task defined in the previous section. Each sample has 10 inputs and three outputs, therefore, the network requires an input layer that expects 10 inputs specified via the “input_dim” argument in the first hidden layer and three nodes in the output layer. We will use the popular ReLU activation function in the hidden layer. The hidden layer has 20 nodes, which were chosen after some trial and error. We will fit the model using mean absolute error (MAE) loss and the Adam version of stochastic gradient descent.
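The model described above can be written in a few lines of Keras. The layer sizes (10 inputs, a 20-node ReLU hidden layer, 3 linear output nodes) and the MAE loss with the Adam optimizer come straight from the text; the random training data is our own placeholder, since the excerpt's dataset definition is elided.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# MLP for multi-output regression as described in the excerpt:
# 10 inputs, one hidden layer of 20 ReLU nodes, and an output layer
# with 3 nodes using the default linear activation.
model = Sequential()
model.add(Dense(20, input_dim=10, activation="relu"))
model.add(Dense(3))  # one output node per target variable
model.compile(loss="mae", optimizer="adam")

# Placeholder data (our assumption; the excerpt's dataset is not shown):
# 100 samples, each with 10 inputs and 3 targets.
X = np.random.rand(100, 10)
y = np.random.rand(100, 3)
model.fit(X, y, epochs=2, verbose=0)

print(model.output_shape)  # (None, 3)
```

Calling `model.predict` on an array of shape `(n, 10)` then returns `(n, 3)` predictions, one value per target variable per sample.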

Machine learning wards off threats at TV studio Bunim Murray

While its name is probably little-known to most viewers, Bunim Murray is kind of a big deal in TV. Founded in the late 1980s when two TV producers were flung together to produce a so-called ‘unscripted soap opera’ for the MTV network, the resulting show, The Real World, was instrumental in establishing the reality TV genre. The new company went on to develop global hits including Keeping Up With The Kardashians, Project Runway and The Simple Life. Bunim Murray’s CTO Gabe Cortina arrived at the firm with the infamous 2014 hack on Sony Pictures weighing on his mind. This incident centred on the release of The Interview, a comedy starring Seth Rogen and James Franco which depicted the fictionalised assassination of North Korean dictator Kim Jong-Un. Likely perpetrated by groups with links to the North Korean state, the large-scale leak of data from the studio caused great embarrassment for many high-profile individuals. From the get-go, Cortina understood that a similar kind of breach could be seriously damaging to Bunim Murray. “We’ve been in business for 30 years. We have a strong brand and we’re known for delivering high-quality shows,” he tells Computer Weekly.

Security Concerns for Peripheral APIs on the Web

To ensure a relatively secure browsing experience, browsers sandbox websites, providing only limited access to the rest of the computer and even to other websites open in different tabs/windows. What differentiates the Web Bluetooth/USB APIs from other Web APIs, such as MediaStream or Geolocation, that received wide adoption from all browser vendors is the level of specificity they offer. When a user enters a website that uses the Geolocation API, the browser shows a pop-up requesting permission to access the current position. While approving this request can pose a security risk, the user makes a conscious decision to provide his or her location to the website. At the same time, the browser exposes a set of specific API calls (such as getCurrentPosition) that do exactly that. Bluetooth and USB communication, on the other hand, work at a lower level, making it difficult to discern which actions are being performed by the website. For example, Web Bluetooth communication with a device happens via writeValue, which accepts arbitrary data and can trigger any number of actions on the target device.
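The contrast can be illustrated with a toy sketch (hypothetical Python classes standing in for the browser APIs; these are not the real interfaces): a narrow call like getCurrentPosition makes the requested action explicit, while a low-level write of arbitrary bytes does not.

```python
class GeolocationLike:
    """A narrow API: the single call maps to one well-understood action."""
    def get_current_position(self):
        # The browser (and the user) know exactly what is being shared.
        return {"lat": 51.5, "lon": -0.1}

class BluetoothCharacteristicLike:
    """A low-level API: arbitrary bytes whose effect depends on the device."""
    def __init__(self):
        self.log = []
    def write_value(self, data: bytes):
        # The browser can see *that* bytes were written, not *what* they do.
        self.log.append(data)

gps = GeolocationLike()
pos = gps.get_current_position()   # intent is explicit in the call name

char = BluetoothCharacteristicLike()
char.write_value(b"\x01\xff")      # intent is opaque: could trigger anything
```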

Regulated Blockchain: A New Dawn in Technological Advancement

What a regulated blockchain portends is that the chilling effect that negative statements from government officials and the media, along with regulatory uncertainty, have been having on entrepreneurs, investors, the market, and the industry at large will become a thing of the past. One area where we have started seeing positive impact and transformation is digital currency. The internet was the precursor of cashless policy and internet banking, all of which greatly reduced the stress people had to go through to conduct business. The Chinese government vehemently opposed cryptocurrency because it was decentralized, so it is a great relief to see that the People’s Bank of China (PBOC) is at the forefront of legitimizing digital currency. As part of a pilot program, PBOC introduced a homegrown digital currency across four cities, a huge leap towards actualizing the first electronic payment system by a major central bank. The Bank of England (BoE) is following in China’s footsteps, though its plans were still at the review stage as of July 2020. Andrew Bailey, the Governor of the BoE, was reported to have said, “I think in a few years, we will be heading toward some sort of digital currency.”

It’s never the data breach -- it’s always the cover-up

This is a warning to CSOs and CISOs: Remove all sense of impropriety in IR. Concealing a data breach is illegal. Every decision made during an incident might be used in litigation and will be scrutinized by investigators. In this case, it's also led to criminal charges filed against a well-known security leader. If your actions seem to conceal rather than investigate and resolve a data breach, expect consequences. Neither the ransom nor the bug bounty is at issue here. Paying the ransom through the bug bounty was alleged to help conceal the breach. Firms should develop a digital extortion policy, so that there are no allegations of impropriety should they choose to pay a ransom. In addition, the guidelines of your bug bounty program should not be altered on the fly to facilitate activities outside that program. Work closely and openly with senior leadership on breaches and issues of ransom. Sullivan tried to get the hackers to sign non-disclosure agreements -- a legal document between two legitimate entities, effectively acknowledging the hackers as business entities -- which allowed Uber to treat the hackers as third parties. Treating the ransom as a "cost of doing business" helped them conceal the payment from the management team as well.

Quote for the day:

"What I've really learned over time is that optimism is a very, very important part of leadership." -- Bob Iger

Daily Tech Digest - August 28, 2020

The Merging Of Human And Machine. Two Frontiers Of Emerging Technologies

The field of human and biological enhancement has many applications for medical science. These include precision medicine, genome sequencing and gene editing (CRISPR), cellular implants, and wearables that can be implanted in the human body. The medical community is experimenting with delivering nano-scale drugs, including antibiotic “smart bombs” that target specific strains of bacteria. Soon it will be able to implant devices such as bionic eyes and bionic kidneys, or artificially grown and regenerated human organs. Succinctly, we are on the cusp of significantly upgrading the human ecosystem. It is indeed revolutionary. This revolution will expand exponentially in the next few years. We will see the merging of artificial circuitry with signatures of our biological intelligence, retrieved in the form of electric, magnetic, and mechanical transductions. Retrieving these signatures will be like taking pieces of cells (including our tissue-resident stem cells) in the form of “code” for their healthy, diseased or healing states, or a code for their ability to differentiate into all the mature cells of our body. This process will represent an unprecedented way of glimpsing human identity.

Five ‘New Normal’ Imperatives for Retail Banking After COVID-19

The current financial crisis highlights an already trending need for responsible, community-minded banking. How financial institutions respond to the COVID-19 crisis — and the actions they take as the economy begins to right itself — will influence their reputations in the long term. Personalized service and community-mindedness have never been more important. The approach to providing them, however, will often be different from the past. Data-powered audience segmentation can help banks and credit unions proactively anticipate the needs of their customers, then offer services and solutions to meet them. Voice-of-consumer and social listening tools can help financial institutions understand and monitor their brand perception. It’s important to develop a process and allocate resources to engage with consumers in the digital space. For example, when complaints or concerns are raised on social media or other channels, they should be triaged quickly and effectively. If this capability is something you have previously put off developing, it’s time to re-prioritize. According to EY’s Future Consumer Index, only 17% of consumers surveyed said they trusted their financial institutions in a time of crisis.

The 7 Benefits of DataOps You’ve Never Heard

The convergence of point-solution products into end-to-end platforms has made modern DataOps possible. Agile software that manages, governs, curates, and provisions data across the entire supply chain enables efficiencies, detailed lineage, collaboration, and data virtualization, to name a few benefits. While many point solutions will continue, success today comes from having a layer of abstraction that connects and optimizes every stage of the data lifecycle, across vendors and clouds, in order to streamline and protect the full ecosystem. As machine-learning and AI applications expand, the successful outcome of these initiatives depends on expert data curation, which involves the preparation of the data, automated controls to reduce the risks inherent in data analysis, and collaborative access to as much information as possible. Data collaboration, like other types of collaboration, fosters better insights and new ideas, and overcomes analytic hurdles. While often considered a downstream discipline, providing collaboration features across data discovery, augmented data management, and provisioning results in better AI/ML outcomes. In our COVID-19 age, collaboration has become even more important, and the best of today’s DataOps platforms offer benefits that break down the barriers of remote work, departmental divisions, and competing business goals.

Generation Data: the future of cloud era leaders

What’s more, with most organisations adopting multiple cloud environments, data is more fragmented than ever before. As such, businesses are looking to data governance specialists (not just data scientists, but data engineers too) to ensure that there is a catalogue of where data resides across the different landscapes, and that it is well secured and well governed. It’s important to have people who can spot the risks associated with where data is – or in some cases isn’t – stored, while deploying artificial intelligence (AI) to help secure it within those cloud environments. Cloud specialists can take on several different job titles within the business, and at some organisations a single data leader like the CDO must seamlessly shift between multiple roles in order to achieve success. Meanwhile at others, a team of data leaders, each with a specialised role under a unified data strategy, is the model for success. What’s clear is that as data becomes part of everyone’s working lives, to ensure we’re not short on talent, businesses need to engage with a wide range of individuals such as citizen integrators and citizen analysts to upskill within existing roles and to truly democratise data. This means equipping existing and future employees with the skills needed to garner insights from existing data sets.

Standing Out with Invisible Payments: The Banking-as-a-Service Paradox

Industry players, such as regulators, FinTech partners, and businesses in the banking, financial services and insurance industries, are starting to realise that it is not ideal to be a ‘jack-of-all-trades’. In fact, the core of BaaS is built upon strategic collaboration. As such, there should be more acceptance of strengths and weaknesses among financial players so they can better identify what they are good at and what they need help with. Essentially, financial players need to ‘piggyback’ on either big banks or other financial service partners with strong regulatory licence networks. Furthermore, if they can identify a market that is underpenetrated, there is a good opportunity to work with existing players to fill the gap. For instance, the recent partnership between InstaReM and SBM Bank India allows users to remit money to more markets and send funds overseas in real time. As its licensed banking partner, InstaReM will facilitate international money transfers from India to an expanded list of markets, including new destinations such as Malaysia and Hong Kong. In another example, Nium’s partnership with Geoswift, an innovative payment technology company, will enable overseas customers to remit money into China.

How state and local governments can better combat cyberattacks

Hit by ransomware and other attacks, state and local governments are obviously aware of the need for strong cybersecurity. And they have taken certain measures to beef up security. Many local governments have hired top cybersecurity people and created more effective teams. The recent congressional Cyberspace Solarium Commission stressed the need for better security coordination among local governments, the federal government, and the private sector. The State and Local Government Cybersecurity Act of 2019, passed last year, is designed to foster greater collaboration among the different parties. But government agencies are not all alike, especially at the local vs. state level. Differences exist in funding and preparedness. Security policies can vary from one agency to another. Plus, the effort to digitize systems and services at such a rapid pace means that security sometimes gets left behind. Looking at open-source data on 108 cyberattacks on state and local municipalities from 2017 to late 2019, BlueVoyant found that the number rose by almost 50%. Over the same time, ransomware demands surged from a low of $30,000 a month to as high as almost $500,000 in July 2019, according to the report.

How AI can enhance your approach to DevOps

Companies can resort to AI data mapping techniques to accelerate data transformation processes. At the same time, machine learning (ML) used in data mapping will also automate data integrations, allowing businesses to extract business intelligence and drive important business decisions quickly. Taking it a step further, organizations can push for AI/ML-powered DevOps for self-healing and self-managing processes, preventing abrupt disruptions and script breaks. Besides that, organizations can opt for AI to recommend solutions for writing more efficient and resilient code, based on the analysis of past application builds and performance. The ability of AI and ML to scan through troves of data with higher precision will play an essential role in delivering better security. Through a centralized logging architecture, employees can detect and highlight any suspicious activities on the network. With the help of AI, organizations can track and learn the hacker’s motive in trying to breach a system. This capability will help DevOps teams navigate existing threats and mitigate their impact. Communication is also a vital component of a DevOps strategy, but it’s often one of the biggest challenges when organizations move to the methodology, given how much information flows through the system.
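As a rough illustration of detection over centralized logs, the toy sketch below (plain Python, invented log lines, and a naive frequency heuristic standing in for the ML the passage describes) flags event types that occur unusually rarely:

```python
from collections import Counter

def rare_events(log_lines, threshold=0.05):
    """Flag log event types whose relative frequency falls below a threshold.
    A toy stand-in for ML-driven anomaly detection over centralized logs."""
    # Normalize each line to an event "shape" by dropping variable numeric fields.
    shapes = [" ".join(tok for tok in line.split() if not tok.isdigit())
              for line in log_lines]
    counts = Counter(shapes)
    total = len(shapes)
    return {shape for shape, n in counts.items() if n / total < threshold}

# Invented sample: routine logins plus one rare, suspicious event type.
logs = ["login ok user 101"] * 50 + ["login ok user 102"] * 48 + ["privilege escalation user 9"]
suspicious = rare_events(logs)
```

A real pipeline would learn baselines over time rather than use a fixed threshold, but the shape of the idea is the same: surface what deviates from normal traffic.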

Shifting the mindset from cloud-first to cloud-best using hybrid IT

Those who are approaching the cloud for the first time face the classic question around which type of service to choose, public or private. Both have different use cases and can be critical for businesses in achieving their objectives. For instance, the public cloud is agile, scalable and simple to use, great for teams looking to get up and running quickly. However, the private cloud offers its own benefits, chiefly a greater degree of control over data and performance. As organisations hosting their data in a private cloud are in full control of that data, there’s typically a more consistent security posture and a greater degree of flexibility and control over how that data is used and managed. Moreover, the private cloud can typically deliver faster and higher-throughput environments for those mission-critical applications that cannot run in the public cloud without business-impacting performance issues. However, companies risk getting caught in the cloud divide, feeling as though the public cloud is not appropriate for their enterprise applications, or that on-prem enterprise infrastructure isn’t as user-friendly, simple or scalable as the public cloud. Ultimately, organisations should be able to make infrastructure choices based on what’s best for their business, not constrained by what the technology can do or where it lives.

5 Critical IT Roles for Rapid Digital Transformation

Information security leaders are the individuals who protect the information and activity of an organization. These professionals lead the charge in establishing the appropriate security standards and implementing the best policies and procedures needed to prevent security breaches and data theft. As more information and activity happens within the cloud infrastructure during a transformation, security has to be a top priority. Given the current situation, the rise of online activity has led to an increase in cyber-attacks. With digital transformation, businesses need assurance that their technologies are adequately protected. An InfoSec leader will help quarterback the security game plan as well as monitor for abnormal activity and handle the recovery should any issues arise. Since data analysts are there to retrieve, gather and analyze data, they hold an important role in the digital transformation journey. Technology opens the doors to a world of data that must be uncovered and understood to deliver any real value. The insights data analysts can provide allow organizations to take a data-driven approach to the decision-making process. Since there is a lot of uncertainty in the current business climate, data analysts are a huge asset.

Facing gender bias in facial recognition technology

Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack just how its ML modeling and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify faces provided to it. It comes pretrained to identify the location of a face, offering two face-location finders: HOG, a slower CPU-based algorithm, and CNN, a faster algorithm that makes use of specialized processors found in graphics cards. Both services provide match results with additional information. Besides the match found, a similarity score indicates how closely a candidate face matches the known face. If the face on file doesn’t exist, a similarity threshold set too low may incorrectly match a face. However, a face can have a low similarity score and still be a true match when the image doesn’t show the face clearly. For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time. Amazon Rekognition correctly identified all pictures we provided.
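The threshold trade-off around similarity scores can be sketched as follows (toy 3-D embeddings and an invented distance-to-score mapping; real dlib face encodings are 128-dimensional and are typically compared by raw Euclidean distance):

```python
import math

def similarity(a, b):
    """Map Euclidean distance between face embeddings to a 0..1 similarity score."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def is_match(known, candidate, threshold=0.6):
    # Below the threshold we treat the pair as different people.
    return similarity(known, candidate) >= threshold

known_face   = [0.1, 0.9, 0.3]    # toy embeddings, invented for illustration
same_person  = [0.12, 0.88, 0.31]
other_person = [0.9, 0.1, 0.7]

assert is_match(known_face, same_person)
# A lax threshold starts accepting strangers -- the false-match risk noted above.
lax = is_match(known_face, other_person, threshold=0.3)
```

Raising the threshold reduces false matches but rejects blurry or poorly lit true matches, which is exactly the tension the passage describes.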

Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher

Daily Tech Digest - August 27, 2020

Different Ways In Which Enterprises Can Utilize Business Intelligence

Embedded BI is simply the integration of self-service BI into commonly used business applications. BI tools offer an improved user experience with visualization, real-time analytics and interactive reporting. A dashboard might be provided within the application to show important information, or various diagrams, charts and reports might be generated for immediate review. Some types of embedded BI extend functionality to mobile devices, ensuring that a distributed workforce can access identical business intelligence for collaborative efforts in real time. At a more advanced level, embedded BI can become part of workflow automation, so that specific actions are triggered automatically based on parameters set by the end user or other decision makers. Despite the name, embedded BI is normally deployed alongside the enterprise application rather than hosted within it. Both web-based and cloud-based BI are available for use with a wide assortment of business applications. Self-service analytics allows end users to easily analyze their data by creating their own reports and modifying existing ones without the need for training.

Conway's Law, DDD, and Microservices

In Domain-Driven Design, the idea of a bounded context is used to provide a level of encapsulation to a system. Within that context, a certain set of assumptions, ubiquitous language, and a particular model all apply. Outside of it, other assumptions may be in place. For obvious reasons, it's recommended that there be a correlation between teams and bounded contexts, since otherwise it's very easy to break the encapsulation and apply the wrong assumptions, language, or model to a given context. Microservices are focused, independently deployable units of functionality within an organization or system. They map very well to bounded contexts, which is one reason why DDD is frequently applied to them. In order to be truly independent from other parts of the system, a microservice should have its own build pipeline, its own data storage infrastructure, etc. In many organizations, a given microservice has a dedicated team responsible for it (and frequently for others as well). It would be unusual, and probably inefficient, to have a microservice that any number of different teams all share responsibility for maintaining and deploying.
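A small sketch of the bounded-context idea (hypothetical Sales and Shipping contexts; all names invented for illustration): each context keeps its own model of an "order", and crossing the boundary is an explicit translation rather than a shared class.

```python
from dataclasses import dataclass

@dataclass
class SalesOrder:
    """In the Sales context, an order is about money and the buyer."""
    order_id: str
    customer_id: str
    total_cents: int

@dataclass
class ShippingOrder:
    """In the Shipping context, an order is about packages and addresses."""
    order_id: str
    address: str
    weight_kg: float

def to_shipping(sales: SalesOrder, address: str, weight_kg: float) -> ShippingOrder:
    # Crossing contexts is an explicit translation, never a shared model.
    return ShippingOrder(order_id=sales.order_id, address=address, weight_kg=weight_kg)

s = SalesOrder("o-1", "c-42", 1999)
sh = to_shipping(s, "10 High St", 1.2)
```

Keeping the models separate is what lets each team (and each microservice) evolve its own assumptions without breaking the other's.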

Cybersecurity at a crossroads: Moving toward trust in our technologies

Many of the foundational protocols and applications simply assumed trust; tools we take for granted like email were designed for smaller networks in which participants literally knew each other personally. To address attacks on these tools, measures like encryption, complex passwords, and other security-focused technologies were applied, but that didn't address the fundamental issue of trust. All the complex passwords, training, and encryption technologies in the universe won't prevent a harried executive from clicking on a link in an email that looks legitimate enough, unless we train that executive to no longer trust anything in their inbox, which compromises the utility of email as a business tool. If we're going to continue to use these core technologies in our personal and business lives, we as technology leaders need to shift our focus from a security arms race, which is easily defeated by fallible humans, to incorporating trust into our technology. Incorporating trust makes good business sense at a basic level; I'd happily pay a bit extra for a home security device that I trust not to be mining bitcoin or sending images to hackers in a distant land, just as businesses who've seen the very real costs of ransomware would happily pay for an ability to quickly identify untrusted actors.

Deep Fake: Setting the Stage for Next-Gen Social Engineering

In order to safeguard against BEC, we often advise our clients to validate a suspicious request by obtaining second-level validation, such as picking up the phone and calling the solicitor directly. Other means of digital communication — cellular text or instant messaging — can be used to ensure the validity of the transaction and are highly recommended. These additional validation measures would normally be enough to thwart scams. As organizations start to elevate security awareness among their user community, these types of tricks are becoming less effective. But threat actors are also evolving their strategies and finding new and novel ways to improve their chances of success. This scenario might seem far-fetched or highly fictionalized, but an attack of this sophistication was executed successfully last year. Could deep fakes be utilized to enhance a BEC scam? What if threat actors gain the ability to synthesize the voice of the company's CEO? The scam was initially executed using the synthesized voice of a company's executive, demanding that the person on the other end of the line pay an overdue invoice.

Did Your Last DevOps Strategy Fail? Try Again

Don’t perform a shotgun wedding between ops and dev. Administrators and developers are drawn to their technology foci for personal reasons and interests. One of the most cited reasons for unsuccessful DevOps plans is a directive to homogenize the team, followed by shock when this didn’t work. Developers are attracted to and rewarded for innovation and building new things, while admins take pride in finding ways to migrate the mission-critical apps everyone forgets about onto new hosting platforms. They’re complementary, integrable engineers, but they’re not interchangeable cogs. Contrary to popular opinion, telling developers they’re going to carry a pager for escalation doesn’t magically improve code quality, and it can slow innovation. They may even quit, even in this chaotic economy. And telling ops they need to learn code patterns, git merge and dev toolchains will be an unwelcome distraction unrelated to keeping their business running or meeting their personal review goals. They also may quit. It might be helpful to share with your team the simple idea you embrace: Unfounded stories of friction between dev and ops aren’t about the teams.

What is IPv6, and why aren’t we there yet?

Adoption of IPv6 has been delayed in part due to network address translation (NAT), which takes private IP addresses and turns them into public IP addresses. That way a corporate machine with a private IP address can send to and receive packets from machines located outside the private network that have public IP addresses. Without NAT, large corporations with thousands or tens of thousands of computers would devour enormous quantities of public IPv4 addresses if they wanted to communicate with the outside world. But those IPv4 addresses are limited and nearing exhaustion to the point of having to be rationed. NAT helps alleviate the problem. With NAT, thousands of privately addressed computers can be presented to the public internet by a NAT machine such as a firewall or router. The way NAT works is when a corporate computer with a private IP address sends a packet to a public IP address outside the corporate network, it first goes to the NAT device. The NAT notes the packet’s source and destination addresses in a translation table. The NAT changes the source address of the packet to the public-facing address of the NAT device and sends it along to the external destination.
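The bookkeeping described above can be sketched in a few lines (a toy model only: real NAT devices also track the destination address and protocol, handle timeouts, and rewrite checksums; the addresses and ports here are invented):

```python
import itertools

class Nat:
    """Toy source-NAT: rewrite private source addresses to one public address,
    keeping a translation table so replies can be mapped back."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}                      # public port -> (private ip, private port)
        self.ports = itertools.count(40000)  # ephemeral ports handed out in order

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        pub_port = next(self.ports)
        self.table[pub_port] = (src_ip, src_port)
        # The packet leaves with the NAT device's public-facing source address.
        return (self.public_ip, pub_port, dst_ip, dst_port)

    def inbound(self, dst_port):
        # Replies are matched against the table and forwarded to the private host.
        return self.table[dst_port]

nat = Nat("203.0.113.5")
pkt = nat.outbound("10.0.0.7", 51515, "93.184.216.34", 443)
reply_target = nat.inbound(pkt[1])  # maps back to the private machine
```

This is why thousands of privately addressed machines can share one public IPv4 address: the translation table, not the address itself, distinguishes the internal hosts.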

Five DevOps lessons: Kubernetes to scale secure access control

Failure is a very real factor when trying to transform from a virtual and bare-metal server farm to a distributed cluster, so determine how your services can scale and communicate if you’re geographically separating your data and customers. Clusters operate differently at scale than your traditional server farms, and containers have a completely different security paradigm than your average virtualised application stack. Be prepared to tweak your cluster layouts and namespaces as you begin your designs and trials. Become agile with Infrastructure as Code (IaC), and be willing to build multiple proofs of concept when deploying. Tests can take hours, and teardown and standup can be painful when making micro-tweaks along the way. If you do this, you will head off larger scaling problems with a good base for faster and larger scale. My advice is to keep your core components close and design for relay points or services when attempting to port into containers, or into multi-cluster designs. ... Sidecar design patterns, although wonderful conceptually, can either go incredibly right or horribly wrong. Kubernetes sidecars provide non-intrusive capabilities, such as reacting to Kubernetes API calls, setting up config files, or filtering data from the main containers.
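The sidecar pattern mentioned here can be sketched as a pod spec (a hypothetical example: the image names and mount paths are placeholders, not from the source). A main container and a log-filtering sidecar share an emptyDir volume, so the sidecar adds its capability without the main container changing at all:

```yaml
# Hypothetical pod spec: a log-filtering sidecar alongside the main container.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                    # main container writes logs to a shared volume
      image: example.com/app:1.0   # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-filter             # sidecar reads and filters the same volume
      image: example.com/log-filter:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}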

A new IT landscape empowers the CIO to mix and match

Platforms like Zapier or Integromat that deliver off-the-shelf integrations for hundreds of popular IT applications, as well as integration platforms-as-a-service (iPaaS) like Jitterbit, OutSystems, or TIBCO Cloud Integration that make it easy for IT -- or even citizen developers -- to quickly remix apps and data into new solutions, have dramatically changed the art of the possible in IT. So, at least technically, creating new high-value digital experiences out of existing IT is now not just possible, but can be made commonplace. The rest has become a vendor management, product skillset, and management/governance issue. The major industry achievements of ease-of-integration and ready IT mix-and-match must go up against the giants in the industry who have very entrenched relationships with IT departments today. That's not to say that CIOs aren't avidly interested in avoiding vendor lock-in, accelerating customer delivery, bringing more choice to their stakeholders, satisfying needs more precisely and exactly than ever before, or becoming more relevant again in general as IT is increasingly competing directly with external SaaS offerings, outside service providers, and enterprise app stores, to name just three capable IT sourcing alternatives to lines of business.

Reaping Benefits Of Data Democratization Through Data Governance

We characterize the integration of data democratization with data governance as an all-encompassing approach to managing information, one that spans the governance teams and all data stakeholders, the policies and rules they create, and the metrics by which they measure success. Governed data democratization permits you to clearly understand your data set and to connect all the policies and controls that apply to it. Governed data democratization is how you establish the necessary privacy strategies to maintain consumer loyalty while ensuring that your organization remains strictly in compliance with both external regulatory mandates and internal security protocols. Furthermore, it’s on this foundation of data governance that you deliver the right information to the right consumers, with the right quality and the right level of trust. An intelligent, integrated, and efficient data governance strategy scales your company’s capacity to quickly and cost-effectively expand data management, by combining the data governance workflow with a data democratization system that incorporates self-service capabilities.

How chatbots are making banking extra conversational

AI isn’t a new idea, of course, but its uptake in the banking industry has been accelerated by awareness of the need to improve digital experiences and by the availability of open-source tools from the likes of Google, Amazon, and other new entrants, which, when combined with plenty of customer and business data, have made the technology simple, fast and powerful. Like any other business, banks are under pressure to move quickly with technology or lose out to hungrier, more ambitious rivals and aggressive new kids on the block. With Gartner predicting that customers will manage 85% of their relationships with an enterprise without interacting with a human, and TechEmergence expecting chatbots to become the primary consumer application within the next five years, conversational AI is now a serious focus. And while digitization has been under way in banking for decades, keeping pace with customers’ expectations for fast, convenient, secure services that can be accessed from anywhere on any device is no mean feat, especially as society barrels closer to a cashless future by the day.

Quote for the day:

"You never change things by fighting the existing reality. build a new model that makes the existing model obsolete." -- Buckminster Fuller

Daily Tech Digest - August 26, 2020

New Zealand stock exchange hit by cyber attack for second day

The incident follows a number of alleged cyber attacks by foreign actors, such as the targeting of a range of government and private-sector organisations in Australia. In a statement earlier on Wednesday, the NZX blamed Tuesday’s attack on overseas hackers, saying that it had “experienced a volumetric DDoS attack from offshore via its network service provider, which impacted NZX network connectivity”. It said the attack had affected NZX websites and the markets announcement platform, causing it to call a trading halt at 3.57pm. It said the attack had been “mitigated” and that normal market operations would resume on Wednesday, but this subsequent attack has raised questions about security. A DDoS attack aims to overload traffic to internet sites by infecting large numbers of computers with malware that bombards the targeted site with requests for access. Prof Dave Parry, of the computer science department at Auckland University of Technology, said it was a “very serious attack” on New Zealand’s critical infrastructure. He warned that it showed a “rare” level of sophistication and determination, and also flagged security issues possibly caused by so many people working from home.

Disruption Happens With What You Don't Know You Don't Know

"There are things we know we don't know, and there are things we don't know we don't know." And what I'm trying to explain is the practitioner's point of view. Now, when I come to you and I tell you, "You know what, you don't know this, Peter. You don't know this. And if you do this, you would be making a lot of money," you will say, "Who are you to tell me that?" So I need to build confidence first. So the first part of the discussion starts from telling you what you already know. So when you do use the data, the idea is to create a report, and that's what reports are for. Look at how organizations make decisions: what they do is they get a report and they take a decision on that report. But 95% of the time, I know that the people who are making that decision or reading that report already know the answer in the report. That's why they're comfortable with the report, right? So let's look at a board meeting where the board has a hunch that this quarter they're going to make a 25% increase in their sales. They have the hunch. Now, they're going to get a report which will say 24% or 29%; it will be in the ballpark range. So there's no unknown. But if I'm only telling you what you already know, 

How the Coronavirus Pandemic Is Breaking Artificial Intelligence and How to Fix It

When trained on huge data sets, machine learning algorithms often ferret out subtle correlations between data points that would have gone unnoticed to human analysts. These patterns enable them to make forecasts and predictions that are useful most of the time for their designated purpose, even if they’re not always logical. For instance, a machine-learning algorithm that predicts customer behavior might discover that people who eat out at restaurants more often are more likely to shop at a particular kind of grocery store, or maybe customers who shop online a lot are more likely to buy certain brands. “All of those correlations between different variables of the economy are ripe for use by machine learning models, which can leverage them to make better predictions. But those correlations can be ephemeral, and highly context-dependent,” David Cox, IBM director at the MIT-IBM Watson AI Lab, told Gizmodo. “What happens when the ground conditions change, as they just did globally when covid-19 hit? Customer behavior has radically changed, and many of those old correlations no longer hold. How often you eat out no longer predicts where you’ll buy groceries, because dramatically fewer people eat out.”
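Cox's point about ephemeral correlations can be demonstrated with a toy model (a hypothetical sketch, not IBM's work): a predictor that learns the pre-pandemic link between eating out and grocery choice performs at chance once that correlation disappears.

```python
import random

random.seed(0)

def make_data(n, corr):
    # Each sample: (eats_out_often, shops_at_store_a).
    # corr is the probability that the label actually follows the feature.
    data = []
    for _ in range(n):
        x = random.random() < 0.5
        y = x if random.random() < corr else not x
        data.append((x, y))
    return data

def train(data):
    # "Learn" the majority label for each feature value, a stand-in
    # for the correlation a real model would latch onto.
    counts = {True: [0, 0], False: [0, 0]}
    for x, y in data:
        counts[x][int(y)] += 1
    return {x: c[1] >= c[0] for x, c in counts.items()}

def accuracy(model, data):
    return sum(model[x] == y for x, y in data) / len(data)

model = train(make_data(5000, corr=0.9))                  # trained pre-shift
acc_before = accuracy(model, make_data(5000, corr=0.9))   # correlation holds
acc_after = accuracy(model, make_data(5000, corr=0.5))    # correlation gone
print(round(acc_before, 2), round(acc_after, 2))          # roughly 0.9 vs 0.5
```

Nothing about the model changed between the two evaluations; only the ground conditions did, which is exactly the failure mode the article describes.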

How the edge and the cloud tackle latency, security and bandwidth issues

With the rise of IoT, edge computing is rapidly gaining popularity as it solves the issues the IoT has when interacting with the cloud. If you picture all your smart devices in a circle, the cloud is centralised in the middle of them; edge computing happens on the edge of that cloud. Referring literally to geographic location, edge computing happens much nearer the device or business, whatever ‘thing’ is transmitting the data. These computing resources are decentralised from data centres; they are on the ‘edge’, and it is here that the data gets processed. With edge computing, data is scrutinised and analysed at the site of production, with only relevant data being sent to the cloud for storage. This means much less data is sent to the cloud, reducing bandwidth use; privacy and security breaches become less likely because data stays at the site of the device, making ‘hacking’ it much harder; and the speed of interaction with data increases dramatically. While edge and cloud computing are often seen as mutually exclusive approaches, larger IoT projects frequently require a combination of both. Take driverless cars as an example.
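The filter-at-the-source pattern described above can be sketched in a few lines (a hypothetical illustration; the threshold and field names are invented):

```python
def process_at_edge(readings, threshold=30.0):
    """Analyse sensor readings locally; forward only the relevant ones.

    Normal readings stay on the device, and the cloud receives just the
    anomalies plus a small summary, cutting bandwidth dramatically.
    """
    forwarded = [r for r in readings if r > threshold]   # relevant data only
    summary = {"count": len(readings), "max": max(readings)}
    return forwarded, summary  # in practice these would be sent upstream

# Five raw readings arrive at the device; only two exceed the threshold.
readings = [21.5, 22.0, 45.3, 21.8, 33.1]
forwarded, summary = process_at_edge(readings)
print(forwarded)  # [45.3, 33.1]
```

Out of five readings, only two cross the threshold and leave the device; the cloud still gets enough (the summary) to know the sensor is alive and what its peak was.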

Basing Enterprise Architecture on Business Strategy: 4 Lessons for Architects

Analogous ideas regarding the primacy of the business strategy are also expressed by other authors, who argue that EA and IT planning efforts in organizations should stem directly from the business strategy. Bernard states that “the idea of Enterprise Architecture is that of integrating strategy, business, and technology”. Parker and Brooks argue that the business strategy and EA are interrelated so closely that they represent “the chicken or the egg” dilemma. These views are supported by Gartner, whose analysts explicitly define EA as “the process of translating business vision and strategy into effective enterprise change”. Moreover, Gartner analysts argue that “the strategy analysis is the foundation of the EA effort” and propose six best practices to align EA with the business strategy. Unsurprisingly, similar views are also shared by academic researchers, who analyze the integration between the business strategy and EA, as well as the modeling of the business strategy in the EA context. To summarize, in the existing EA literature the business strategy is widely considered the necessary basis for EA, and for many authors the very concepts of business strategy and EA are inextricably coupled.

Building Effective Microservices with gRPC, Ballerina, and Go

In modern microservice architecture, we can categorize microservices into two main groups based on their interaction and communication. The first group of microservices acts as external-facing microservices, which are directly exposed to consumers. They are mainly HTTP-based APIs that use conventional text-based messaging payloads (JSON, XML, etc.) optimized for external developers, and they use Representational State Transfer (REST) as the de facto communication technology. REST’s ubiquity and rich ecosystem play a vital role in the success of these external-facing microservices. OpenAPI provides well-defined specifications for describing, producing, consuming, and visualizing these REST APIs. API management systems work well with these APIs and provide security, rate limiting, caching, and monetization aligned with business requirements. GraphQL can be an alternative to HTTP-based REST APIs, but it is out of scope for this article. The other group of microservices is internal and doesn’t communicate with external systems or external developers. These microservices interact with each other to complete a given set of tasks. Internal microservices use either synchronous or asynchronous communication.
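One reason internal microservices often favor binary protocols such as gRPC's Protocol Buffers over text payloads is message size and parsing cost. As a rough stand-in comparison (using Python's struct module rather than Protobuf itself, and an invented order record), the same data packs into far fewer bytes in a binary layout:

```python
import json
import struct

# The same order record as external-facing JSON and as a packed
# binary layout (a crude stand-in for a Protobuf-style encoding).
order = {"id": 42, "qty": 3, "price": 19.99}

json_bytes = json.dumps(order).encode("utf-8")
# "<IIf": little-endian unsigned int, unsigned int, 32-bit float.
binary_bytes = struct.pack("<IIf", order["id"], order["qty"], order["price"])

print(len(json_bytes), len(binary_bytes))  # the binary form is several times smaller
```

Real gRPC adds schemas, code generation, and HTTP/2 multiplexing on top, but the size gap above is a large part of why binary contracts suit high-volume internal traffic while JSON suits external developer ergonomics.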

Lessons learned after migrating 25+ projects to .NET Core

One thing you need to be aware of when jumping from .NET Framework to .NET Core is the faster roll-out of new versions. That includes shorter support intervals too. With .NET Framework, 10 years of support wasn't uncommon, whereas for .NET Core a 3-year interval seems to be the norm. Also, when picking which version of .NET Core to target, you need to look into the support level of each version. Microsoft marks certain versions as long-term support (LTS), which lasts around 3 years, while the versions in between are stable but have a shorter support period. Overall, these changes require you to update the .NET Core version more often than you have been used to, or accept running on an unsupported framework version. ... The upgrade path isn't exactly straightforward. There might be some tools to help with this, but I ended up migrating everything by hand. For each website, I took a copy of the entire repo, then deleted all files in the working folder and created a new ASP.NET Core MVC project. I then ported each thing one by one, starting with copying in controllers, views, and models and making some global search-and-replace passes to make it compile.
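For context, the version targeting discussed above lives in the project file. A minimal SDK-style project file pinned to an LTS release might look like this (illustrative only; .NET Core 3.1 was the current LTS at the time of writing):

```
<!-- Hypothetical minimal project file after migrating an ASP.NET MVC site.
     The old .NET Framework csproj used
     <TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>;
     the new SDK-style file declares the target in one line. -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework> <!-- LTS release -->
  </PropertyGroup>
</Project>
```

Because the SDK-style file globs source files by default, most of the per-file `<Compile Include="...">` entries from the old project simply disappear, which is part of why recreating the project and copying files in works at all.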

Changing How We Think About Change

Many of the most impressive and successful corporate pivots of the past decade have taken the form of changes of activity — continuing with the same strategic path but fundamentally changing the activities used to pursue it. Think Netflix transitioning from a DVD-by-mail business to a streaming service; Adobe and Microsoft moving from software sales models to monthly subscription businesses; Walmart evolving from physical retail to omnichannel retail; and Amazon expanding into physical retailing with its Whole Foods acquisition and launch of Amazon Go. Further confusing the situation for decision makers is the ill-defined relationship between innovation and change. Most media commentary focuses on one specific form of innovation: disruptive innovation, in which the functioning of an entire industry is changed through the use of next-generation technologies or a new combination of existing technologies. (For example, the integration of GPS, smartphones, and electronic payment systems — all established technologies — made the sharing economy possible.) In reality, the most common form of innovation announced by public companies is digital transformation initiatives designed to enhance execution of the existing strategy by replacing manual and analog processes with digital ones.

More than regulation – how PSD2 will be a key driving force for an Open Banking future

A crucial factor standing in the way of the acceleration towards Open Banking has been the delay to API development. These APIs are the technology that TPPs rely on to migrate their services and customer base to remain PSD2 compliant. One of the contributing factors was that the RTS, which apply to PSD2, left room for too many different interpretations. This ambiguity caused banks to slip behind and delay the creation of their APIs. This delay hindered European TPPs in migrating their services without losing their customer base, particularly outside the UK, where there has been no regulatory extension and where the API framework is the least advanced. Levels of awareness of the new regulations and changes to how customers access bank accounts and make online payments are very low among consumers and merchants. This leads to confusion and distrust of the authentication process in advance of the SCA roll-out. Moreover, because the majority of customers don’t know about Open Banking yet, they aren’t aware of the benefits. Without customer awareness and demand it may be very hard for TPPs to generate interest and uptake for their products.

Election Security's Sticky Problem: Attackers Who Don't Attack Votes

One of the lessons drawn by both sides was how inexpensive it was for the red team to have an impact on the election process. There was no need to "spend" a zero-day or invest in novel exploits. Manipulating social media is a known tactic today, while robocalls are cheap-to-free. Countering the red team's tactics relied on coordination between the various government authorities and ensuring communication redundancy between agencies. Anticipating disinformation plans that might lead to unrest also worked well for the blue team, as red team efforts to bring violence to polling places were put down before they bore fruit. The red team also tried to interfere with voting by mail; they hacked a major online retailer to send more packages through the USPS than normal, and used label printers to put bar codes with instructions for resetting sorting machines on a small percentage of those packages. While there was some slowdown, there was no significant disruption of the mail around the election.

Quote for the day:

"It's hard to get the big picture when you have a small frame of reference." -- Joshing Stern