Daily Tech Digest - June 06, 2019

Cisco will use AI/ML to boost intent-based networking

“By applying machine learning and related machine reasoning, assurance can also sift through the massive amount of data related to such a global event to correctly identify if there are any problems arising. We can then get solutions to these issues – and even automatically apply solutions – more quickly and more reliably than before,” Apostolopoulos said. In this case, assurance could identify that the use of WAN bandwidth to certain sites is increasing at a rate that will saturate the network paths and could proactively reroute some of the WAN flows through alternative paths to prevent congestion from occurring, Apostolopoulos wrote.  “In prior systems, this problem would typically only be recognized after the bandwidth bottleneck occurred and users experienced a drop in call quality or even lost their connection to the meeting. It would be challenging or impossible to identify the issue in real time, much less to fix it before it distracted from the experience of the meeting. Accurate and fast identification through ML and MR coupled with intelligent automation through the feedback loop is key to successful outcome.”
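As a rough, hypothetical illustration of the kind of proactive check described above (not Cisco's implementation), forecasting WAN-path utilization from recent samples and flagging paths that are on track to saturate might look like this:

```python
# Illustrative only -- not Cisco's assurance engine. Fit a linear trend to recent
# utilization samples for one WAN path and flag it for rerouting if the forecast
# crosses the path's capacity within the planning horizon.
import numpy as np

def needs_reroute(utilization_mbps, capacity_mbps, horizon_intervals=12):
    t = np.arange(len(utilization_mbps))
    slope, intercept = np.polyfit(t, utilization_mbps, 1)  # simple linear trend
    forecast = slope * (len(utilization_mbps) + horizon_intervals) + intercept
    return forecast >= capacity_mbps

samples = [420, 450, 480, 510, 555, 600]  # hypothetical Mbps readings per interval
print(needs_reroute(samples, capacity_mbps=1000))  # True: reroute some flows early
```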



DevOps security best practices span code creation to compliance


As software development velocity increases with the adoption of continuous approaches, such as Agile and DevOps, traditional security measures struggle to keep pace. DevOps enables quicker software creation and deployment, but flaws and vulnerabilities proliferate much faster. As a result, organizations must systematically change their approaches to integrate security throughout the DevOps pipeline. ... Software security often starts with the codebase. Developers grapple with countless oversights and vulnerabilities, including buffer overflows; authorization bypasses, such as not requiring passwords for critical functions; overlooked hardware vulnerabilities, such as Spectre and Meltdown; and ignored network vulnerabilities, such as OS command or SQL injection. The emergence of APIs for software integration and extensibility opens the door to security vulnerabilities, such as lax authentication and data loss from unencrypted data sniffing. Developers' responsibilities increasingly include security awareness: They must use security best practices to write hardened code from the start and spot potential security weaknesses in others' code.
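To make one of those code-level issues concrete, here is a minimal, hypothetical illustration of the SQL injection pattern mentioned above and the parameterized query that hardens it, using Python's built-in sqlite3 module (the table and input are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-supplied value

# Unsafe: attacker-controlled input is concatenated directly into the SQL text,
# so the OR clause matches every row.
unsafe = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")
print(unsafe.fetchall())  # [('admin',)] -- data the caller should not see

# Hardened: a parameterized query keeps the input as data, not executable SQL.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # [] -- the malicious string matches nothing
```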


Reinforcement learning explained

The environment may have many state variables. The agent performs actions according to a policy, which may change the state of the environment. The environment or the training algorithm can send the agent rewards or penalties to implement the reinforcement. These may modify the policy, which constitutes learning. For background, this is the scenario explored in the early 1950s by Richard Bellman, who developed dynamic programming to solve optimal control and Markov decision process problems. Dynamic programming is at the heart of many important algorithms for a variety of applications, and the Bellman equation is very much part of reinforcement learning. A reward signifies what is good immediately. A value, on the other hand, specifies what is good in the long run. In general, the value of a state is the expected sum of future rewards. Action choices—policies—need to be computed on the basis of long-term values, not immediate rewards. Effective policies for reinforcement learning need to balance greed or exploitation—going for the action that the current policy thinks will have the highest value—against exploration, trying actions the current policy has not fully evaluated but that might turn out to be more valuable.
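To make the reward-versus-value distinction and the exploitation/exploration trade-off concrete, here is a minimal, hypothetical sketch of tabular Q-learning with epsilon-greedy action selection; it illustrates the general idea rather than any code from the article:

```python
import random
from collections import defaultdict

Q = defaultdict(float)                   # estimated long-term value of (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

def choose_action(state, actions):
    # Exploration: occasionally try an action the current policy does not favor.
    if random.random() < epsilon:
        return random.choice(actions)
    # Exploitation: pick the action with the highest estimated long-term value.
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Bellman-style update: immediate reward plus discounted value of the best
    # follow-up action, blended into the current estimate.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```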


The Linux desktop's last, best shot


Closer to home in the West, companies are turning to Linux for their engineering and developer desktops. Mark Shuttleworth, founder of Ubuntu Linux and its corporate parent Canonical, recently told me: "We have seen companies signing up for Linux desktop support because they want to have fleets of Ubuntu desktop for their artificial intelligence engineers." Even Microsoft has figured out that advanced development work requires Linux. That's why Windows Subsystem for Linux (WSL) has become a default part of Windows 10. So, the opportunity is there for Linux to grab some significant market share. My question is: "Is anyone ready to take advantage of this opportunity?" All the major Linux companies -- Canonical, Red Hat and SUSE -- support Linux desktops, though it's not a big part of their businesses. The groups that do focus on the desktop, such as Mint, MX Linux, Manjaro Linux, and elementary OS, are small and under-financed. So I can't see them delivering the support most users -- never mind governments and companies -- need.


DNS – a security opportunity not to be overlooked, says Nominet


“We are seeing a lot more breaches, and with many businesses embracing digital transformation, the attack surface is getting wider. But in many cases, having an understanding of what is going on in the DNS layer can reduce the impact of breaches and even prevent them,” said Reed. “DNS has an important role to play because it underpins the network activity of all organisations. And because around 90% of malware uses DNS to cause harm, DNS potentially provides visibility of malware before it does so.” In addition to providing organisations with an opportunity to intercept malware before it contacts its command and control infrastructure, DNS visibility enables organisations to see other indicators of compromise such as spikes in IP traffic and DNS hijacking. “Being able to track and monitor DNS activity is important as it enables organisations to identify phishing campaigns and the associated leakage of data. It also enables them to reduce the time attackers are in the network and spot new domains being spun up for malicious activity and data exfiltration,” said Reed.
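As a simple, hypothetical sketch of the kind of DNS-layer visibility Reed describes (not Nominet's product), flagging two of the indicators mentioned above—queries to never-before-seen domains and sudden spikes in query volume—could look like this:

```python
# Illustrative sketch only: assumes a log of (timestamp, client_ip, domain) tuples
# and a set of previously observed domains.
from collections import Counter

def find_dns_indicators(query_log, known_domains, spike_threshold=1000):
    new_domains = set()
    per_client = Counter()
    for ts, client_ip, domain in query_log:
        per_client[client_ip] += 1
        if domain not in known_domains:
            new_domains.add(domain)       # possible newly spun-up malicious domain
    spiking_clients = [ip for ip, n in per_client.items() if n > spike_threshold]
    return new_domains, spiking_clients   # candidates for further investigation
```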


The Sustainability Revolution Hits Retail

Technology is paramount to building a truly sustainable business. Retailers are already applying advanced data analytics to supply chains to make the most of resources and reduce waste, which has a knock-on effect in terms of sustainability. The Industrial Internet of Things (IIoT) will continue to improve operational efficiency across different organisations, cutting down on energy and expenditure. Despite debate over the sustainability of blockchain, distributed ledger technology could bring about the transparency that could kill environmentally or socially questionable products. Blockchain could provide visibility across the entire supply chain, so buyers know exactly where a product came from and how it was made. Richline Group, for example, is already using blockchain to ensure that its diamonds are ethically sourced. Materials science also has an important role in finding new materials that are cheaper and lower maintenance than existing alternatives. 3D printing is key to working with new materials, creating rapid prototypes for testing. The adoption of innovative manufacturing techniques like 3D printing and advanced robotics is expected to make supply chains more efficient.


Blazor on the Server: The Good and the Unfortunate


If you're wondering what the difference is between Blazor and BotS ... well, from "the code on the ground" point of view, not much. It's pretty much impossible, just by looking at the code in a page, to tell whether you're working with Blazor-on-the-Client or Blazor-on-the-Server. The primary difference between the two -- where your C# code executes -- is hidden from you. With BotS, SignalR automatically connects activities in the browser with your C# code executing on the server. That SignalR support obviously makes Blazor solutions less scalable than other Web technologies because of SignalR's need to maintain WebSocket connections between the client and the server. However, that scalability issue may not be as much of a limitation as you might think. What BotS does do, however, is "normalize" a lot of the ad hoc approaches that have been needed when working with Blazor in previous releases. BotS components are, for example, just another part of an ASP.NET Core project and play well beside other ASP.NET Core technologies like Razor Pages, View Components and good old Controllers+Views.


Self-learning sensor chips won’t need networks

Key to Fraunhofer IMS’s Artificial Intelligence for Embedded Systems (AIfES) is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.” It’s “decentralized AI,” says Fraunhofer IMS. “It’s not focused towards big-data processing.” Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved. “It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says. Other benefits of decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.


New RCE vulnerability impacts nearly half of the internet's email servers

In a security alert shared with ZDNet earlier today, Qualys, a cyber-security firm specialized in cloud security and compliance, said it found a very dangerous vulnerability in Exim installations running versions 4.87 to 4.91. The vulnerability is described as a remote command execution -- different, but just as dangerous as a remote code execution flaw -- that lets a local or remote attacker run commands on the Exim server as root. Qualys said the vulnerability can be exploited instantly by a local attacker that has a presence on an email server, even with a low-privileged account. But the real danger comes from remote hackers exploiting the vulnerability, who can scan the internet for vulnerable servers, and take over systems. "To remotely exploit this vulnerability in the default configuration, an attacker must keep a connection to the vulnerable server open for 7 days (by transmitting one byte every few minutes)," researchers said. "However, because of the extreme complexity of Exim's code, we cannot guarantee that this exploitation method is unique; faster methods may exist."


What is CI/CD? Continuous integration and continuous delivery explained

Continuous integration is a development philosophy backed by process mechanics and some automation. When practicing CI, developers commit their code into the version control repository frequently, and most teams have a minimal standard of committing code at least daily. The rationale behind this is that it’s easier to identify defects and other software quality issues on smaller code differentials rather than larger ones developed over extended periods of time. In addition, when developers work on shorter commit cycles, it is less likely for multiple developers to be editing the same code and requiring a merge when committing. Teams implementing continuous integration often start with version control configuration and practice definitions. Even though checking in code is done frequently, features and fixes are implemented on both short and longer time frames. Development teams practicing continuous integration use different techniques to control what features and code are ready for production.
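One common technique for that last point—controlling which features are exposed even though the code behind them is merged and deployed frequently—is a feature flag. The sketch below is hypothetical and not taken from the article:

```python
# Hypothetical sketch: unfinished work is merged and deployed but kept behind a flag
# until it is ready for production users.
FEATURE_FLAGS = {
    "new_checkout_flow": False,  # merged to trunk, deployed, not yet enabled
    "faster_search": True,
}

def is_enabled(flag_name: str) -> bool:
    return FEATURE_FLAGS.get(flag_name, False)

def legacy_checkout(cart):
    return f"legacy checkout of {len(cart)} items"

def new_checkout(cart):
    return f"new checkout of {len(cart)} items"

def checkout(cart):
    # The in-progress feature is only reachable once its flag is switched on.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))  # uses the legacy path until the flag is flipped
```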



Quote for the day:


"When building a team, I always search first for people who love to win. If I can't find any of those, I look for people who hate to lose." - H. Ross Perot


Daily Tech Digest - June 05, 2019

The Internet of Things enables a floating city of pleasure... and a vision of hell

Every passenger and all the ship's staff carry a wireless Bluetooth and NFC-enabled medallion about the size of a fat 25-cent coin. Through a massive network of sensors and edge computing devices, the medallion controls the opening of cabin doors, ordering drinks, delivery of services, and in emergencies it ensures no one is missed. Facial recognition is used to identify passengers as they come on board. And their location is known at all times to the ship's captain through a large dashboard that also shows the exact location of each of the ship's workers. This location information is used in many ways -- like by cleaning staff to service a cabin when they notice it is empty. Previously, they had to rely on knocking or other signs of vacancy. It's also used to deliver drinks and food directly to the passenger. And the medallion automatically unlocks the cabin door before the passenger reaches it. Drinks and food are automatically charged to the passenger's account, and alcohol consumption is not monitored or flagged if excessive. The medallion is also used for funds in the ship's casino.



4 reasons why Agile works and the most common excuse when it doesn’t

This is clearly linked to self-determination, because when teams are setting their own deadlines there is automatically an increased level of confidence in the outcome, and confidence is a critical component of success. In my experience, teams are not afraid of hard work; they are afraid of failure. And when you look at the stats around failure, one study showed that on projects that failed, 75% of the time the teams involved knew it was going to fail on day 1. Now this lack of confidence can become a self-fulfilling prophecy, but by the same thinking, a belief that the project will be successful can also become self-fulfilling. When teams believe a project will fail and it starts to fail, they go into "I told you so" mode. However, when they believe a project will succeed and it starts to fail, they go into solution mode, looking to find out what caused the issues and trying to resolve them. Teams in solution mode will always outperform teams in "I told you so" mode.


Cloud computing and regulation: Following the eye of the storm

Out of the rapid growth of cloud computing technologies, we are starting to see a shift in how the law and regulation keep up. A major question mark looming over the sector is its lack of standardized guidance. Cloud computing is not governed by a specific “cloud law,” and no direct regulation applies to its services. Instead, the legal and regulatory landscape is made up of a matrix of different rules, as wide as the scope of the technology itself, spanning multiple industries and geographies. Given this breadth, there has been a gradual shift from legislative solutions to industry standardization as a means of closing the gap between regulation and the eye of the technological innovation storm. Whilst there is no direct legislation, some UK regulators, most notably in the financial services sector, have in recent years published guidance on the use of cloud technologies. This guidance focuses on how the technology can be used in compliance with existing regulatory rules, and whilst it has not set out a step-by-step process for deploying cloud technologies in compliance with regulatory requirements, it has shown that the regulators consider that there is no fundamental reason why firms cannot use cloud services in a regulatory compliant manner.


What is the cloud: beyond infrastructure as a service

Cloud adoption has grown rapidly, and today we find that almost all companies are using some form of cloud. However, research estimates that only approximately 20% of an enterprise's applications are in the cloud today. We are now entering chapter two, where we will focus on getting the next 80% of workloads — the mission-critical ones — to the cloud to optimize everything from supply chains to sales transactions. As we enter this next chapter, the definition of cloud is expanding and companies are now viewing it as an opportunity to incorporate existing IT and private cloud environments with new public cloud capabilities like AI and analytics completely underpinned by security. Moreover, they need to be able to easily choose where to deploy their workloads across all of these environments, which requires a commitment to open source technology and increased automation and management. This is a hybrid cloud approach, and this strategy is helping companies find new ways to solve age-old challenges, launch brand new business services, completely transform user and employee experiences, and much more.


Providing Drivers a Safety Net with Computer Vision


Synthetic Data is fast becoming an essential component of autonomous driving and computer vision AI systems. By bringing techniques from the movie and gaming industries (simulation, CGI) together with emerging generative neural networks (GANs, VAEs), we are now able to engineer perfectly-labeled, realistic datasets and simulated environments at scale. There is virtually no incremental cost of additional generated images, and since the Synthetic Data is generated rather than captured, all the attributes are known to pixel-perfect precision. Key labels such as depth, 3D position and partially obstructed objects are all provided by design. Application of this technology could allow important safety features to be brought to market quickly and cost-effectively, from crash prevention software to predictive maintenance, onboard diagnostics, and location insights. Synthetic Data is a cost-effective solution that cuts down on the time and effort needed to acquire, clean and organize driver data.


Data Uncertainty In The Time Of Brexit: How Business Can Protect Their Data

With so much noise surrounding Brexit and the constantly changing circumstances and deadlines, it can be easy for businesses to bury their heads in the sand and wait for the dust to settle. However, it is crucial for businesses to take proactive measures to ensure their data processes stand the test of time. If they don’t act now, they will be left behind by quicker, more agile businesses. Data is the new currency for any business, and not being able to have an easy flow of data from the EU will seriously impact British business. Without the free flow of data to inform customer insights, market trends, and competitor analysis, the revenue streams of UK businesses will be seriously impacted as delays in data governance, management and usage will put these businesses at a serious competitive disadvantage. With political decisions continuing to fluctuate, organisations need to be prepared. The outcome of good preparation should be the agility that enables organisational resilience in the face of disruption to international data flows.


Phishing attacks that bypass 2-factor authentication are now easier to execute  

To overcome 2FA, attackers need to have their phishing websites function as proxies, forwarding requests on victims' behalf to the legitimate websites and delivering back responses in real time. The final goal is not to obtain only usernames and passwords, but active session tokens known as session cookies that the real websites associate with logged-in accounts. These session cookies can be placed inside a browser to access the accounts they're associated with directly without the need to authenticate. This proxy-based technique is not new and has been known for a long time, but setting up such an attack required technical knowledge and involved configuring multiple independent tools such as the NGINX web server to run as reverse-proxy. Then the attacker needed to manually abuse the stolen session cookies before they expire. Furthermore, some websites use technologies like Subresource Integrity (SRI) and Content Security Policy (CSP) to prevent proxying, and some even block automated browsers based on headers.


On the Frontier of an Evolving IT Workscape: What's Ahead for IT Work

People involved in the buying and selling of IT skills are skeptical that the talent pool emerging from four-year universities, business schools and community colleges will provide the skills that enterprises need to prosper. Only 16% of our respondents in large enterprises and 20% of those in midmarket enterprises believe they'll find the necessary skills from these graduates. And only a third of Habitat respondents in large enterprises and half of those in midmarket organizations believe that paying staff well will enable them to acquire the necessary IT expertise. Damien Bean, a former corporate IT vice president at Hilton Hotels Corp. and founder of CareerCurrency LLC, envisions service providers, not educators, playing an expanded role in getting IT work done. "My hypothesis is that the bottom half of the entire portfolio will move to a service model in the next 10 years," he says. "The hidden parts of this equation are demographics and outsourcing. A lot of the newest and most challenging projects are being built partly or solely offshore."


Network monitoring in the hybrid cloud/multi-cloud era

Most newer vendors will have a good API, he adds. Older ones might be slower to open up APIs to customers because they consider the data they produce with their analytics to be proprietary. “Infrastructure teams may have an advantage with some of the legacy tools that they currently have that are expanding into cloud-native environments,” Laliberte says. Tool sets like Riverbed, which integrates SNMP polling, flow and packet capture to get an enterprise network view of performance in hybrid cloud environments, and SolarWinds advanced network monitoring for on-premises, hybrid, and cloud, “give the opportunity to tie in both the legacy and cloud” monitoring, he adds. ... “Whether we call it hybrid, cloud or SD networking, the future of networking is software defined – with distributed rather than centralized intelligence or control,” Siegfried says. “The same automation philosophy, infrastructure and code techniques that have disrupted other areas of infrastructure management are applying to networking as well.”


Surviving and thriving in year three as a chief data officer

Data and analytics projects can be classified as either defense or offense (in the immortal words of Tom Davenport). Data defense seeks to resolve issues, improve efficiency or mitigate risks. Data quality, security, privacy, governance, compliance – these are all critically important endeavors, but they are often viewed as tactical, not strategic. The only time that data defense is discussed at the C level is when something goes wrong. Data offense expands top line revenue, builds the brand, grows the company and in general puts points on the board. Using data analytics to help marketing and sales is data offense. Companies may acknowledge the importance of defense, but they care passionately about offense and focus on it daily. The challenge for a CDO or CAO is that data defense is hard. A company’s shortcomings in governance, security, privacy, or compliance may be glaringly obvious. In some cases, new regulations like GDPR scream for attention. Data defense has a way of consuming more than its fair share of the attention and staff.



Quote for the day:


"Dont be afraid to stand for what you believe in, even if that means standing alone." -- Unknown


Daily Tech Digest - June 04, 2019

What the Future of Fintech Looks Like

Fintech has been driving huge changes across the financial services sector, but one area that is seeing exponential change is in the ultra-high net-worth individual (UHNWI) space. Crealogix Group, a global market leader in digital banking, has been working with banks across the world on their digital transformation journey for over 20 years, and it is only recently that they are seeing growing momentum in private wealth to digitize. Pascal Wengi, the Asia-Pacific managing director of Crealogix, says: “The old ways of servicing these clients through a personal touch is quickly moving to digitally-led platforms, with younger, tech-savvy UHNWIs wanting an immediate and comprehensive view of their assets without waiting for a phone call. At the same time, they also want customized solutions catered to their unique financial needs.” Platforms that allow access on both sides—clients, and their advisors, family office teams and accountants..., insists Wengi.



Data gravity
Data gravity is a metaphor introduced into the IT lexicon by a software engineer named Dave McCrory in a 2010 blog post. The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the Law of Gravity. In the current Enterprise Data Analytics context, as datasets grow larger and larger, they become harder and harder to move. So, the data stays put. It’s the things attracted to the data — applications and processing power — that move to where the data resides. Digital transformation within enterprises — including IT transformation, mobile devices and Internet of things — is creating enormous volumes of data that are all but unmanageable with conventional approaches to analytics. Typically, data analytics platforms and applications live in their own hardware + software stacks, and the data they use resides in direct-attached storage (DAS). Analytics platforms — such as Splunk, Hadoop and TensorFlow — like to own the data. So, data migration becomes a precursor to running analytics.


5 requirements for success with DataOps strategies

Organizations that operate at this speed of change require modern data architectures that allow for the quick use of the ever-expanding volumes of data. These infrastructures – based on hybrid and multi-cloud for greater efficiency – provide enterprises with the agility they need to compete more effectively, improve customer satisfaction and increase operational efficiencies. When the DataOps methodology is part of these architectures, companies are empowered to support real-time data analytics and collaborative data management approaches while easing the many frustrations associated with access to analytics-ready data. DataOps is a verb, not a noun: it is something you do, not something you buy. It is a discipline that involves people, processes and enabling technology. However, as organizations shift to modern analytics and data management platforms in the cloud, you should also take a hard look at your legacy integration technology to make sure that it can support the key DataOps principles that will accelerate time to insight.



An API architect typically performs a high-level project management role within a software development team or organization. Their responsibilities can be extensive and diverse, and a good API architect must combine advanced technical skills with business knowledge and a focus on communication and collaboration. There are often simultaneous API projects, and the API architect must direct the entire portfolio. API architects are planners more than coders. They create and maintain technology roadmaps that align with business needs. For example, an API architect should establish a reference architecture for the organization's service offerings, outlining each one and describing how they work. The architect should define the API's features, as well as its expected security setup, scalability and monetization. The API architect sets best practices, standards and metrics for API use, as well. These guidelines should evolve as mistakes become clear and better options emerge.



Edge-based caching and blockchain-nodes speed up data transmission

Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache were both connected to the same MySQL database in Virginia, say, its users would get the data three to 16 times faster than with a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says. The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms over 223ms using a traditional cache. An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.


Microsoft's Vision For Decentralized Identity

Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in breaches, affecting our social, professional, and financial lives. Microsoft believes that there’s a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This whitepaper explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations. Today we use our digital identity at work, at home, and across every app, service, and device we engage with. It’s made up of everything we say, do, and experience in our lives—purchasing tickets for an event, checking into a hotel, or even ordering lunch. 


Your 3-minute guide to serverless success

What has propelled the use of serverless? Faster deployment, the simplification and automation of cloudops (also known as “no ops” and “some ops”), integration with emerging devops processes, and some cost advantages. That said, most people who want to use serverless don’t understand how to do it. Many think that you can take traditional on-premises applications and deem them serverless with the drag of a mouse. The reality is much more complex.  Indeed, serverless application development is more likely a fit for net new applications. Even then you need to consider a few things, mainly that you need to design for serverless. Just as you should design for containers and other execution architectures that are optimized by specific design patterns, serverless is no exception. ... The trick to building and deploying applications on serverless systems is understanding what serverless is and how to take full advantage. We have a tendency to apply all of our application architecture experience to all types of development technologies, and that will lead to inefficient use of the technology, which won’t produce the ROI expected—or worse, negative ROI, which is becoming common.


Author Q&A: Chief Joy Officer

Change is hard. We get used to the way we work and we assume it’s just the way it has to be. Inertia is a big deal. Many of us have tried to make changes in our personal life—our health, our financial situation—only to find out we’re stuck in a rut. We know we need to change our behaviors in order to change our outcomes, but changing human behavior is hard. What probably prevents change more than anything is success. If you’re successful enough, then it’s hard to be convinced of the value of change. You’ll say, well, why should we change when we’re already successful? Of course the problem with success is that it is often fleeting. It’s not like you reach a level of success and then automatically stay there. Every organization, every market, and every business ebbs and flows. When it’s flowing awesomely, we figure we don’t need to change. But when it’s ebbing, we get scared—and sometimes that’s the least opportune time to make a change, because fear can cloud our ability to make the best decisions for our organizations or our teams.


Discover practical serverless architecture use cases


A more complete serverless architecture-based system comes into play with the workloads related to video and picture analysis. In this example, serverless computing enables an as-needed workflow to spin up out of a continuous process, and the event-based trigger pulls in an AI service: Images are captured and analyzed on a standard IaaS environment, with events triggering the use of Amazon Rekognition or a similar service to carry out facial recognition when needed. The New York Times used such an approach to create its facial recognition system that used public cameras around New York's Bryant Park. Software teams can also use serverless designs to aid technical security enforcement. Event logs from any device on a user's platform can create triggers that send a command into a serverless environment. The setup kicks off code to identify the root cause for the logged event or a machine learning- or AI-based analysis of the situation on the device. This information, in turn, can trigger what steps to take to rectify issues and protect the overall systems.
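As a hypothetical sketch of the event-driven pattern described above (not the New York Times system), an AWS Lambda handler triggered by an S3 upload that passes the new image to Amazon Rekognition for face detection might look like this:

```python
# Hypothetical sketch of an event-driven serverless workflow: an S3 upload event
# triggers this Lambda function, which asks Rekognition to analyze the new image.
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Analyze the newly uploaded image in place, without moving the data.
        response = rekognition.detect_faces(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            Attributes=["DEFAULT"],
        )
        results.append({"object": key, "faces": len(response["FaceDetails"])})
    return results
```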


It’s time for the IoT to 'optimize for trust'

The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with SSL. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed. Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. It found and analyzed 56 million IoT-device transactions over that time, and identified the type of devices, protocols they used, the servers they communicated with, how often communication went in and out and general IoT traffic patterns. The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).



Quote for the day:


"The ability to continuously simplify, while adding more value and removing clutter, is a superpower." -- @ValaAfshar


Daily Tech Digest - June 03, 2019

Cloud computing could look quite different in a few years


Everything may run on the cloud, but running multiple clouds at the same time can still pose challenges, such as compliance with data regulation. Slack — the fastest-growing Software-as-a-Service company on the planet — has already shown how integration can work, and its success is reflected in its trial-to-paid conversion rate, which stands at 30 percent. Slack integrates with other apps such as Trello, Giphy, and Simple Poll so users can access all of them from a single platform. This is something we’ll see increasingly in cloud computing as players large and small look to help businesses and individuals become more efficient and productive. As more and more of life happens in the cloud, the term “cloud” could disappear altogether (and companies like mine, with “cloud” in their name may need to rethink their branding). What we now call “cloud computing” will simply be “computing.” And maybe, by extension, “as-a-Service” will disappear, too, as SaaS replaces traditional software. In tech, you can never be certain of the direction of travel. Things change quickly and in unexpected ways, and some of the changes we’ve seen over even the past 10 years would have been inconceivable just a few years before.



Diversifying the high-tech talent pool

Entrepreneurs always find a way. I’ve never considered being a woman or a Latina to be an obstacle. In fact, I usually consider it to be quite an asset, in part due to the incredible entrepreneurial culture of the Hispanic community in general and my family in particular. There are so many challenges to starting your own business at 25 years old, including insufficient access to affordable capital, top talent, and customers. These obstacles can be overcome only through consistent growth; that in turn can be accomplished only by consistently reinvesting back into Pinnacle. In many ways, everything we have achieved has only been made possible by the simple philosophy of investing back into the business, which is a message I share with other entrepreneurs every chance I get. ... The successful firms — Pinnacle included — have embraced these technologies and adapted their business models and service offerings accordingly. Others have chosen to sell, resulting in our industry consolidating somewhat over the years. No matter what, the one thing we will always be able to count on is change, so we’re making the investments today to be ready for tomorrow.


What are edge computing challenges for the network?


In the ongoing back and forth between centralized and decentralized IT, we are beginning to see the limitations of a centralized IT that relies on hundreds or thousands of industry standard servers running a host of applications in consolidated data centers. New types of workloads, distributed computing and the advent of IoT have fueled the rise of edge computing. ... When compute resources and applications are centralized in a data center, enterprises can standardize both technical security and physical security. It's possible to build a wall around the resources for easier security. But edge computing forces businesses to grapple with enforcing the same network security models and the physical security parameters for more remote servers. The challenge is the security footprint and traffic patterns are all over the place. ... The need for edge computing typically emerges because disparate locations are collecting large amounts of data. Enterprises need an overall data protection strategy that can comprehend all this data.


Empowering robotic process automation with a bot development framework

The bot development framework is a methodology which standardizes bot development throughout the organization. It is a template or skeleton providing generic functionality that can be selectively changed by additional user-written code. It adheres to the design and development guidelines defined by the Center of Excellence (CoE), performs testing, and provides application access. This speeds up the development process and makes it simple and convenient enough for business units to create bots with no or minimum help from the RPA team. It helps save time in development, testing, building, deploying and execution. ... Define frequently changing variables in a central configuration file: Common data such as application URLs, orchestrator queue names, maximum retry numbers, timeout values, asset names, etc. are prone to frequent updates. It is recommended to create a “configuration file” to store these data in a centralized location. This will increase process efficiency by saving the time needed to access multiple applications.
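A minimal, hypothetical sketch of that central configuration file pattern (all names are invented for the example) might look like this in Python:

```python
# Illustrative sketch: keep frequently changing values in one central JSON
# configuration file and load them at the start of each bot run, so URLs, queue
# names, retry counts and timeouts are updated in a single place.
import json

def load_config(path="bot_config.json"):
    with open(path) as f:
        return json.load(f)

config = load_config()
app_url = config["application_url"]        # e.g. the target web application
queue_name = config["orchestrator_queue"]  # queue the bot reads work items from
max_retries = config["max_retry_number"]
timeout_seconds = config["timeout_seconds"]
```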


Experts: Enterprise IoT enters the mass-adoption phase

That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge. “That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey. Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said. One of the central problems – IoT security – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities.



What does Arm's new N1 architecture mean for Windows servers?

The AWS A1 Arm instances are for scale-out workloads like microservices, web hosting and apps written in Ruby and Python. Like Cloudflare's workloads, those are tasks that benefit from the massive parallelisation and high memory bandwidth that Arm provides. Inside Azure, Windows Server on Arm is running not virtual machines — because emulating x86 trades off performance for low power — but highly parallel PaaS workloads like Bing search index generation, storage and big data processing. For the first time, an Arm-based supercomputer (built by HPE with Marvell ThunderX2 processors) is on the list of the top 500 systems in HPC — another highly parallel workload. And the next-generation Arm Neoverse N1 architecture is designed specifically for servers and infrastructure. Part of that is Arm delivering a whole server processor reference design, not just a CPU spec, making it easier to build N1 servers. The first products based on N1 should be available in late 2019 or early 2020, with a second generation following in late 2020 or early 2021.


The World Economic Forum wants to develop global rules for AI


The issue is of paramount importance given the current geopolitical winds. AI is widely viewed as critical to national competitiveness and geopolitical advantage. The effort to find common ground is also important considering the way technology is driving a wedge between countries, especially the United States and its big economic rival, China. “Many see AI through the lens of economic and geopolitical competition,” says Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI. “[They] tend to create barriers that preserve their perceived strategic advantages, in access to data or research, for example.” A number of nations have announced AI plans that promise to prioritize funding, development, and application of the technology. But efforts to build consensus on how AI should be governed have been limited. This April, the EU released guidelines for the ethical use of AI. The Organisation for Economic Co-operation and Development (OECD), a coalition of countries dedicated to promoting democracy and economic development, this month announced a set of AI principles built upon its own objectives.


Data Architect's Guide to Containers

From the perspective of the analyst or data scientist, containers are valuable for a number of reasons. For one thing, container virtualization has the potential to substantively transform the means by which data is created, exchanged, and consumed in self-service discovery, data science, and other practices. The container model permits an analyst to share not only the results of her analysis, but the data, transformations, models, etc. she used to produce it. Should the analyst wish to share this work with her colleagues, she could, within certain limits, encapsulate what she’s done in a container. In addition to this, containers claim to confer several other distinct advantages—not least of which is a consonance with DataOps, DevOps and similar continuous software delivery practices—that I will explore in this series. To get a sense of what is different and valuable about containers, let’s look more closely at some of the other differences between containers, VMs, and related modes of virtualization. ... Unlike a VM image, the ideal container does not have an existence independent of its execution. It is, rather, quintessentially disposable in the sense that it is compiled at run time from two or more layers, each of which is instantiated in an image. Conceptually, these “layers” could be thought of as analogous to, e.g., Photoshop layers: by superimposing a myriad of layers, one on top of the other, an artist or designer can create a rich final image.


Business leaders failing to address cyber threats 


Despite this, the majority (71%) of the C-suite concede that they have gaps in their knowledge when it comes to some of the main cyber threats facing businesses today. This includes malware (78%), despite the fact that 70% of businesses admit they have found malware hidden on their networks for an unknown period of time. When a security breach does happen, in the majority of businesses surveyed, it is first reported to the security team (70%) or the executive/senior management team (61%). In less than half of cases is it reported to the board (40%). This is unsurprising, the report said, in light of the fact that one-third of CEOs state that they would terminate the contract of those responsible for a data breach. The report also reveals that only half of CISOs say they feel valued by the rest of the executive team from a revenue and brand protection standpoint, while nearly a fifth (18%) of more than 400 CISOs questioned in a separate poll say they believe the board is indifferent to the security team or actually sees them as an inconvenience.


Executive's guide to prescriptive analytics


Any data that creates a picture of the present can be used to create a descriptive model. Common types of data are customer feedback, budget reports, sales numbers, and other information that allows an analyst to paint a picture of the present using data about the past. A thorough and complete descriptive model can then be used in predictive analysis to build a model of what's likely to happen in the future if the organization's current course is maintained without any change. Predictive models are built using machine learning and artificial intelligence, and take into account any potential variables used in a descriptive model. Like a descriptive analysis, a predictive model can be as broadly or as narrowly focused as a business needs it to be. Predictive models are useful, but they aren't designed to do anything outside of predicting current trends into the future. That's where prescriptive analytics comes in. A good prescriptive model will account for all potential data points that can alter the course of business, make changes to those variables, and build a model of what's likely to happen if those changes are made.
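A toy, hypothetical sketch of that progression—fit a predictive model on historical data, then use it prescriptively by varying a decision variable and picking the change with the best predicted outcome—might look like this (the data and variables are invented):

```python
# Hypothetical sketch: predictive step fits a model on historical data; prescriptive
# step sweeps a decision variable (discount level) to find the best predicted result.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical data: columns are [discount_percent, ad_spend]; target is units sold.
X = np.array([[0, 10], [5, 10], [10, 12], [15, 12], [20, 15]])
y = np.array([100, 120, 150, 160, 170])

model = LinearRegression().fit(X, y)            # predictive model

ad_spend = 12
candidate_discounts = range(0, 25, 5)           # decision variable to vary
predictions = {d: model.predict([[d, ad_spend]])[0] for d in candidate_discounts}
best_discount = max(predictions, key=predictions.get)
print(best_discount, round(predictions[best_discount], 1))
```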



Quote for the day:


"The question isn't who is going to let me; it's who is going to stop me." -- Ayn Rand


Daily Tech Digest - June 02, 2019

The future of system architecture

So far, the primary effect of any API-first mandate has been to make developers ensure they document their APIs and publicize them. But a major thrust of the Amazon API-first mandate was to reduce the costs incurred from developing duplicate capabilities in multiple systems. Because most enterprises do not update all their systems every few years, any API-first mandate will take time to show real effects in the enterprise. But over time, those effects will make themselves felt, especially when an API-first mandate is combined with a reuse-before-build mandate that requires system developers to reuse capabilities available in the enterprise before building new ones. As more systems make their capabilities available through APIs, and development teams are tasked to reuse before building, there will come a point at which building new systems is replaced by recomposing existing capabilities into new capabilities. The amount of duplication across systems with widely varying purposes is surprising. Most systems need a way to store and retrieve data. Most systems need a way to authenticate and authorize users. Most systems need the ability to display text and render graphics.



Is this the future of retail? 7-Eleven launches checkout-free store

Australia’s largest convenience retailer is making a move on checkout-free, launching a “cashless and cardless” concept store in Richmond, Melbourne today. The store will allow customers to pair their cards with a smartphone app, scan items with their cameras, and then walk out. It’s a similar system to the one trialled by Woolworths in Sydney last year and follows the success of Amazon’s no-checkout grocery stores in the US. 7-Eleven chief executive Angus McKay said he’s on a mission to push the envelope on convenience retailing. “We’re trying to push the notion of ‘convenience’ to its absolute limit,” McKay said in a statement circulated on Wednesday morning. “In the new concept store, customers will notice the absence of a counter. The store feels more spacious and customers avoid being funnelled to a checkout location creating a frictionless in-store experience,” he said. The announcement follows a trial run out of an Exhibition Street store in Melbourne, although 7-Eleven hasn’t detailed plans for any further expansion of the concept as yet.


How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh


As more data becomes ubiquitously available, the ability to consume it all and harmonize it in one place under the control of one platform diminishes. Imagine just in the domain of 'customer information', there are an increasing number of sources inside and outside of the boundaries of the organization that provide information about the existing and potential customers. The assumption that we need to ingest and store the data in one place to get value from diverse set of sources is going to constrain our ability to respond to proliferation of data sources. I recognize the need for data users such as data scientists and analysts to process a diverse set of datasets with low overhead, as well as the need to separate the operational systems data usage from the data that is consumed for analytical purposes. But I propose that the existing centralized solution is not the optimal answer for large enterprises with rich domains and continuously added new sources. Organizations' need for rapid experimentation introduces a larger number of use cases for consumption of the data from the platform.


The Intersection of Innovation, Enterprise Architecture and Project Delivery

Peter Drucker famously declared “innovate or die.” But where do you start? Many companies start with campaigns and ideation. They run challenges and solicit ideas both from inside and outside their walls. Ideas are then prioritized and evaluated. Sometimes prototypes are built and tested, but what happens next? Organizations often turn to the blueprints or roadmaps generated by their enterprise architectures, IT architectures and or business process architectures for answers. They evaluate how a new idea and its supporting technology, such as service-oriented architecture (SOA) or enterprise-resource planning (ERP), fits into the broader architecture. They manage their technology portfolio by looking at their IT infrastructure needs. A lot of organizations form program management boards to evaluate ideas, initiatives and their costs. In reality, these evaluations are based on lightweight business cases without broader context. They don’t have a comprehensive understanding of what systems, processes and resources they have, what they are being used for, how much they cost, and the effects of regulations.


When algorithms mess up, the nearest human gets the blame


“While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how little or unintentionally they are involved. This pattern offers important insight into the troubling way we speak about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but less than a few days later, the narrative shifted to focus on the distraction of the driver. “We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with. Yet in the current regulatory vacuum, they will continue to pay the steepest cost.


The Death of Enterprise Architecture: defeating the DevOps, microservices, ...

Current application theory says that all responsibility for software should be pushed down to the actual DevOps-style team writing, delivering, and running the software. This leaves the Enterprise Architect role in the dust, effectively killing it off. In addition to this being disquieting to Enterprise Architects out there who have steep mortgage payments and other expensive hobbies, it seems to drop out the original benefits of enterprise architecture, namely oversight of all IT-related activities to make sure things both don't go wrong (e.g., with spending, poor tech choices, problematic integration, etc.) and that things, rather, go right. Michael has spoken with several Enterprise Architecture teams on the changing nature of how Enterprise Architecture helps in a DevOps- and cloud-native-driven culture. He will share their experiences, including what type of Enterprise Architecture is actually needed, tactics for transitioning and when it's best to just kill off Enterprise Architecture and let the DevOps cowboys run wild.


Address goals with various enterprise architecture strategies


Enterprise architecture can also revolve around important application decisions, rather than a diagram of software stacks. In the context of software architecture, decisions include the programming language, platform, type of cloud services used, CI/CD systems involved in deployment, unit tests, the data-interchange format for the API, where the APIs are registered and related systems. For some programmers, the term architecture means a look at just the highest level of design: a set of domain objects that interrelate, such as customer, order and claim. Another view of enterprise architecture in the technical realm revolves around quality attributes. These attributes must exist for the software to work, but are unlikely to fit in a specification document. Examples include reliability, capacity, scalability and security -- even things such as uptime, measuring and monitoring levels, rollback approach, delivery cadence, time to build and time to deploy. Quality elements are not functional requirements, per se, but are ways to determine acceptable operating conditions and necessary tradeoffs to get there.


What You Need to Know about Programmable Logic Controller (PLC)


Nowadays, dedicated pieces of software have been developed for the PC in order to help with PLC programming. Once the program is written, it is then downloaded from the computer to the PLC with a special cable. In the old days, up until the mid-1990s, PLCs were programmed by using either special purpose programming terminals or proprietary programming panels. Often, they had function keys which represented the logical elements of PLC programs. As far as storage goes, programs would get put on cassette tape cartridges. The most popular and widely used form of programming is ladder logic. It features symbols (as opposed to words) in order to emulate relay logic control, with the symbols being interconnected by lines representing the flow of current. As the years went on, the number of symbols available has increased, thus increasing the level of functionality that PLCs have.
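Ladder logic itself is graphical, but as a rough analogy only (not actual PLC code), a single rung that latches a motor on when a start button is pressed and drops it when a stop button is pressed can be expressed as boolean logic:

```python
# Rough analogy of one ladder rung: the classic start/stop seal-in pattern from
# relay control, re-evaluated on every PLC scan cycle.
def motor_rung(start_pressed: bool, stop_pressed: bool, motor_running: bool) -> bool:
    return (start_pressed or motor_running) and not stop_pressed

state = False
for start, stop in [(True, False), (False, False), (False, True)]:
    state = motor_rung(start, stop, state)
    print(state)  # True, True, False: latched on, held on, dropped by stop button
```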


Scanning the fintech landscape
Tala and Branch both seek to offer microlending over mobile devices in developing countries. The US-based companies make real-time loan decisions dynamically by using every piece of information they can gather from the customer’s mobile phone; public reports note that the companies use text messages, contacts, and hundreds of other data points to make underwriting decisions. A new set of companies are developing demographically-focused products. They segment not only from a brand and marketing perspective, but from a product innovation perspective as well. For example, True Link Financial’s elder fraud protections, Finhabits’ savings focus for Latinos, Camino Financial’s lending for Latino-owned small and medium-size businesses, or Ellevest’s product design for women all go beyond branding to design products from scratch with unique use cases and features in mind. Similarly, Brex offers cards tailored individually for startups, for ecommerce companies, and (reportedly) for other small business segments.


Five-Step Action Plan for DevOps at Scale

To give you a practical example of how these steps come together, consider the story of a large manufacturing enterprise with which we had the opportunity to work. They began their enterprise DevOps adoption with a pilot project in which they migrated their database to an AWS data lake. The project quickly showed how DevOps could create greater scalability to support the data demands of the manufacturer’s IoT applications. The manufacturer’s Center of Excellence leveraged this initial success to apply DevOps and digital transformation across the company’s various departments, applying the model above to departments like enterprise architecture and application development, and even to business units like credit services. With the initial pilot project focused on a well-defined migration to AWS, the outcome has been the company’s agile adoption of DevOps for greater security, cost efficiencies and reliability. The idea of enterprise DevOps at scale can be daunting -- especially for large enterprises with complex systems, complicated processes and a great deal of technical debt.



Quote for the day:


"Leadership does not depend on being right." -- Ivan Illich


Daily Tech Digest - June 01, 2019


This challenge will only be amplified as the amount of data available to retailers increases: The market for retail Internet of Things (IoT) sensors, RFID tags, beacons and wearables is projected to grow 23% annually through 2025, which will generate the data needed for targeted customer experiences and optimized operations. As retail consumers increasingly live and shop across multiple channels, a new strategy for analytics is needed to take advantage of all that additional data. Single data pipelines that slow the extraction of learnings, and the decision-making based on those insights, are not the right fit for this new paradigm. A single data pipeline prevents analytics from delivering insights at the pace needed by line-of-business decision makers. In a single-version-of-truth (SVOT) world, employees often lose patience with the process and attempt do-it-yourself strategies with data. An environment where marketing, sales, demand planning, supply chain, operations and finance each apply their own tools, filters and data-modeling decisions will result in a multitude of interpretations, even if they start from the same pile of data.


Despite mounting evidence of the substantial benefits provided by analytics, most companies have barely scratched the surface of what is possible. The good news is that the tide is turning. The field is increasingly attracting new talent, who are introducing skills such as data science and statistics to the realm of HR. This helps to further progress, as do advances in technologies enabling real-time collection and analysis of unstructured as well as structured data. Consequently, these skills are set to keep growing rapidly. Building a people analytics function, coupled with capitalizing on technologies that collect, store, and dynamically visualize data, enables companies to put information at the fingertips of business leaders to support decision-making. Moreover, this democratization of data can also help managers by providing data on their own behaviors, as well as insights that support employee engagement, development, and performance.



PCI Express 5.0 finalized, but 4.0-compatible hardware is only now shipping  

On its own merits, PCIe 5.0 is impressive, doubling the transfer rates of PCIe 4.0, which in turn doubled the transfer rates of PCIe 3.0. To put that in perspective, a PCIe 5.0 x1 slot delivers the same bandwidth (~4GB/s) as a full-size, first-generation PCIe x16 slot from 2003, of the kind commonly used for graphics cards. In terms of practical deployment, it is likely to be some time before PCIe 5.0 devices arrive, though it is possible that Intel may skip PCIe 4.0 entirely, as its Compute Express Link (CXL) technology for connecting FPGA-based accelerators is based on PCIe 5.0. This should be taken with a grain of salt—rumors indicated that Intel planned to skip its 10nm manufacturing process in favor of moving to 7nm, following low yields on 10nm parts. Intel's Computex announcements show 10nm plans for mobile systems, though desktop-class CPUs have yet to be announced. From an implementation standpoint, the step from 4.0 to 5.0 is technically less complex than the step from 3.0 to 4.0, making a quick upgrade of existing 4.0 designs likely.
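A quick back-of-the-envelope check of those bandwidth figures, assuming the nominal signalling rates and line encodings of each generation (8b/10b for PCIe 1.0/2.0, 128b/130b from 3.0 onward) and ignoring protocol overhead:

# Per-lane, one-direction throughput per PCIe generation, derived from the
# transfer rate and the encoding overhead. Values are nominal.

GENERATIONS = {
    # gen: (transfer rate in GT/s, payload bits, total bits on the wire)
    "PCIe 1.0": (2.5, 8, 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8, 10),
    "PCIe 3.0": (8.0, 128, 130),  # 128b/130b encoding
    "PCIe 4.0": (16.0, 128, 130),
    "PCIe 5.0": (32.0, 128, 130),
}

def lane_bandwidth_gb_s(rate_gt_s: float, payload: int, total: int) -> float:
    """Usable bytes per second for one lane, in GB/s."""
    return rate_gt_s * (payload / total) / 8  # GT/s -> payload Gbit/s -> GB/s

for gen, (rate, payload, total) in GENERATIONS.items():
    print(f"{gen}: {lane_bandwidth_gb_s(rate, payload, total):.2f} GB/s per lane")

# A PCIe 5.0 x1 link (~3.9 GB/s) does land in the same ballpark as a
# first-generation x16 slot: 16 lanes * 0.25 GB/s = 4 GB/s.
print("PCIe 1.0 x16:", 16 * lane_bandwidth_gb_s(2.5, 8, 10), "GB/s")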


Sustainable Operations in Complex Systems With Production Excellence


Production excellence is a set of skills and practices that allow teams to be confident in their ownership of production. Production-excellence skills are often found among SRE teams or individuals with the SRE title, but production excellence ought not to be solely their domain. Closing the feedback loop on production ownership requires us to spread these skills across everyone on our teams. Under production ownership, operations become everyone's responsibility rather than “someone else's problem”. Every team member needs a basic fluency in operations and production excellence, even if it's not their full-time focus. And teams need support when cultivating those skills, and need to feel rewarded for them. There are four key elements to making a team and the service it supports perform predictably in the long term. First, teams must agree on which events matter to user satisfaction and eliminate extraneous alerts for those that do not. Second, they must improve their ability to explore production health, starting from symptoms of user pain rather than from exploration based on potential causes.
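As a rough sketch of that first element, a team might encode its agreement on what matters to user satisfaction as a service-level objective with an error budget, and alert only when the budget is actually being burned; the 99.9% target and the request counts below are made-up example values.

# Illustrative SLO check: page on error-budget burn rather than on every raw
# symptom. Numbers are invented for the example.

SLO_TARGET = 0.999          # fraction of requests that should satisfy users
total_requests = 1_200_000  # requests observed in the SLO window
failed_requests = 1_500     # requests that violated user expectations

allowed_failures = total_requests * (1 - SLO_TARGET)   # the error budget
budget_consumed = failed_requests / allowed_failures

print(f"Error budget consumed: {budget_consumed:.0%}")
if budget_consumed >= 1.0:
    print("ALERT: SLO breached - page the on-call engineer")
elif budget_consumed >= 0.75:
    print("WARN: budget nearly exhausted - investigate during working hours")
else:
    print("OK: no user-impacting action needed")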


A Quantum Revolution Is Coming

Now, individuals and entities across NGIOA are part of an entangled global system. Since the ability to generate and manipulate pairs of entangled particles is at the foundation of many quantum technologies, it is important to understand and evaluate how the principles of quantum physics translate to the survival and security of humanity. If an individual human is seen as a single atom, is our behavior guided by deterministic laws? How does individual human behavior impact the collective human species? How is an individual representative of how collective systems, whether economic or security-based, operate? Acknowledging this emerging reality, Risk Group initiated a much-needed discussion on the Strategic Impact of Quantum Physics on Financial Industry with Joseph Firmage, Founder & Chairman at the National Working Group on New Physics based in the United States, on Risk Roundup.


CIO interview: Sam Shah, director for digital development, NHS England


Shah believes the effective use of standards across emerging technology will help break the forms of supplier lock-in that have previously characterised much of the provision of NHS systems and services. To help encourage providers to generate innovative solutions to business challenges in the health service, Shah says the sector needs to be a more attractive place for IT suppliers. “We’re keen to help – we want to generate grants to help innovators in the UK work in partnership with the NHS,” he says. “We have an entire network of academics and scientists that support our work. And we have a much more open approach to development, so that suppliers can start working with the NHS in a more meaningful way. “As we amass more data and connect more datasets, we have an opportunity to bring about precision public health to reduce inequalities and to reduce the burden on society. We can create precision medicine that allows clinicians to prescribe much more precisely around the needs of the patient and their optimal needs. Our world is becoming more data-driven, but we need help from suppliers to deliver these services.”



Put simply, location intelligence is the ability to derive business insights from geospatial information. Those with well-developed location intelligence abilities use GIS, maps, data, and analytical skills to solve real-world problems, specifically business problems. This is an important distinction. Location intelligence is primarily a business term that refers to solving business problems. GIS may be the technical foundation of location intelligence, but it’s not the same thing. ... In reality, when you factor location into analysis, you open up a world of opportunity. Specifically, you make it possible to tackle a unique set of problems. Think about an offshore oil company trying to predict and monitor sea ice activity. Rogue icebergs or shifting ice floes, driven by global climate change, pose a tremendous risk to the safe operation of offshore oil rigs and shipping vessels. Mitigation of sea ice risk is inherently about predicting and monitoring the location of sea ice: its size, shape, and speed, and the consequences if it impacts an oil platform.
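As a toy illustration of that kind of location-aware risk monitoring, the sketch below flags any tracked iceberg that drifts within a safety radius of an offshore platform; the coordinates, radius, and names are invented, and a real GIS workflow would rely on dedicated tooling rather than a hand-rolled distance function.

# Flag tracked icebergs within a safety radius of a hypothetical platform.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

PLATFORM = (71.0, -52.0)      # invented rig position (lat, lon)
SAFETY_RADIUS_KM = 50

icebergs = {                   # invented tracked positions
    "berg-A": (71.2, -52.5),
    "berg-B": (69.5, -48.0),
}

for name, (lat, lon) in icebergs.items():
    d = haversine_km(*PLATFORM, lat, lon)
    status = "ALERT" if d <= SAFETY_RADIUS_KM else "ok"
    print(f"{name}: {d:.0f} km from platform -> {status}")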


European Union Votes to Create a Huge Biometrics Database


The identity records will include names, dates of birth, passport numbers, and other ID information. The biometric details, meanwhile, include fingerprints and facial scans. The primary aim of the biometric database is to make it faster and easier for EU border and law enforcement personnel to search for people’s information. This is an upgrade to the current system of going through different databases when looking for information. The interoperability of the CIR will ensure that law enforcement officers have fast, seamless, systematic and controlled access to the information they need to perform their tasks. It would also detect multiple identities linked to the same set of biometric data and facilitate identity checks of third-country nationals (TCNs), on the territory of a Member State, by police authorities. The CIR for third-country citizens would enable identification of TCNs who lack proper travel documents.
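As a loose illustration of the multiple-identity detection described here, the sketch below groups identity records by a biometric template identifier and flags any template that appears under more than one name; the records and the notion of an exact template key are invented for the example, since real systems match biometrics probabilistically rather than by exact lookup.

# Group identity records by biometric template ID and flag duplicates.
from collections import defaultdict

records = [
    {"name": "A. Example",  "passport": "X1", "template_id": "t-001"},
    {"name": "B. Example",  "passport": "Y2", "template_id": "t-002"},
    {"name": "A. Exemplar", "passport": "Z3", "template_id": "t-001"},  # same biometrics, different identity
]

by_template = defaultdict(list)
for rec in records:
    by_template[rec["template_id"]].append(rec)

for template, recs in by_template.items():
    names = {r["name"] for r in recs}
    if len(names) > 1:
        print(f"Template {template} is linked to multiple identities: {sorted(names)}")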


With regard to a blockchain platform that offers a space for content creators to go about their business unhindered, there is a lot of potential, and there are already some use cases of decentralised content platforms with an incentivisation program attached. Within the blockchain sphere, many are aware of Steemit, a blogging and social networking website that uses the Steem blockchain to reward publishers and curators. It is a useful service because, given its decentralised nature, there should be no censorship - but that is in question, since Steemit Inc still heads up the entire operation. In principle, though, a fully decentralised content platform allows free rein over posting, and because of the token economy associated with it, there is monetisation, as well as crowd sentiment driving the content. Many will worry about hate speech and other dangers being pronounced on these decentralised platforms, but, from quite a libertarian viewpoint, such platforms will only be as successful as the demand for them.


WebAssembly and Blazor: A Decades Old Problem Solved

In mid-April 2019, Microsoft gently nudged a young framework from the "anything is possible" experimental phase to a "we're committed to making this happen" preview. The framework, named Blazor because it runs in the browser and leverages a templating system or "view engine" called Razor, enables the scenario .NET developers almost gave up on. It doesn't just allow developers to build client-side code with C# (no JavaScript required), but also allows developers to run existing .NET Standard DLLs in the browser without a plugin. ... HTML5 and JavaScript continued to win the hearts and minds of web developers. Tools like jQuery normalized the DOM and made it easier to build multi-browser applications, while at the same time browser engines started to adopt a common DOM standard to make it easier to build once and run everywhere. An explosion of front-end frameworks like Angular, React, and Vue.js brought Single Page Applications (SPA) mainstream and cemented JavaScript as the language of choice for the browser operating system.



Quote for the day:


"Great spirits have always encountered violent opposition from mediocre minds." -- Albert Einstein