Daily Tech Digest - April 11, 2019

Build up a DevSecOps pipeline for fast and safe code delivery

Developers in need of a feature -- or simply in a rush -- might pull random Docker images that contain vulnerabilities from public internet repositories. Developers should always treat public container registries with extreme caution. Registry platforms, such as Harbor or even a self-hosted private local registry for the company, offer tighter control over what users deploy within an environment, and also streamline versioning and management to ensure DevSecOps doesn't impede code velocity. Docker Hub also offers certified images, but always exercise vigilance to minimize risk. Other tools ensure code builds don't ship with known vulnerabilities. For example, to prevent the release of software with vulnerable libraries, the auditing tool Open Security Content Automation Protocol (OpenSCAP) scans systems in the delivery pipeline and checks them against the Common Vulnerabilities and Exposures (CVE) library. There are several CVE feeds, both free and paid, that OpenSCAP can use. 
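
A delivery pipeline stage can invoke the oscap command-line tool and fail the build when the OVAL scan flags known CVEs. A minimal sketch of that step follows; the feed file name and the simple output parsing are placeholders to adapt to your distribution's feed, not a prescribed setup:

    # Hypothetical pipeline step: run an OpenSCAP OVAL scan against a CVE feed
    # and fail the build if any vulnerability definition evaluates to true.
    import subprocess
    import sys

    CVE_FEED = "com.redhat.rhsa-all.xml"   # example OVAL/CVE feed (assumption)

    result = subprocess.run(
        ["oscap", "oval", "eval", "--results", "oval-results.xml", CVE_FEED],
        capture_output=True,
        text=True,
    )

    # oscap prints one line per definition; a line ending in "true" means the
    # corresponding vulnerability is present on the scanned system.
    vulnerable = [line for line in result.stdout.splitlines() if line.endswith("true")]
    if vulnerable:
        print(f"{len(vulnerable)} CVE definitions matched -- failing the build")
        sys.exit(1)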


Upgrading from Java 8 to Java 12

Generally speaking, each new release performs better than the previous one. "Better" can take many forms, but in recent releases we've seen improvements in startup time; reduction in memory usage; and the use of specific CPU instructions resulting in code that uses fewer CPU cycles, among other things. Java 9, 10, 11 and 12 all came with significant changes and improvements to Garbage Collection, including changing the default Garbage Collector to G1, improvements to G1, and three new experimental Garbage Collectors (Epsilon and ZGC in Java 11, and Shenandoah in Java 12). Three might seem like overkill, but each collector optimises for different use cases, so now you have a choice of modern garbage collectors, one of which may have the profile that best suits your application. The improvements in recent versions of Java could also lead to a cost reduction: not least of all, tools like JLink can reduce the size of the artifact we deploy, and recent improvements in memory usage could, for example, decrease our cloud computing costs.


Atlassian targets agile development at scale with Jira Align

“There are a lot of benefits [with agile development]: it allows enterprises to be more nimble, to respond quickly to pressure and change their roadmaps as needed based on customer demands,” he said. “But one thing that we have lost through that transformation is the certainty, the visibility and the clarity across an organization of when deadlines will be hit, and when capabilities will be available for customers. Agile simply just doesn't work that way. And that is a big challenge, especially for these very large organizations that need to be more nimble.” “Our biggest customers were looking for guidance for how to scale all of this agile development goodness across thousands of people,” Deatsch said. “That is actually exactly what AgileCraft does.” As an example, Deatsch said a large bank could be building a new mobile app, an effort that could involve large numbers of developers working on individual, but related, projects – from building a front-end UI to back-end transaction systems.


Cube.js: Ultimate Guide to the Open-Source Dashboard Framework

The majority of modern web applications are built as single-page applications, where the front-end is separated from the back-end. The back-end is also usually split into multiple services, following a microservices architecture. Cube.js embraces this approach. Conventionally, you run the Cube.js back-end as a service. It manages the connection to your database, including the query queue, caching, pre-aggregation, and more. It also exposes an API for your front-end app to build dashboards and other analytics features. ... Analytics starts with the data, and data resides in a database. That is the first thing we need to have in place. You most likely already have a database for your application, and usually it is just fine to use for analytics. Modern popular databases such as Postgres or MySQL are well suited for a simple analytical workload. By simple, I mean a data volume of less than 1 billion rows. MongoDB is fine as well; the only thing you’ll need to add is the MongoDB Connector for BI. It allows executing SQL code on top of your MongoDB data. It is free and can be easily downloaded from the MongoDB website. One more thing to keep in mind is replication. It is considered bad practice to run analytics queries against your production database, mostly because of performance issues.
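
To give a feel for how a dashboard front-end (or any HTTP client) talks to the Cube.js back-end, here is a minimal sketch that sends a query to its REST load endpoint; the cube and measure names, port, and token handling are assumptions for illustration, and the exact endpoint details should be checked against the Cube.js documentation:

    # Minimal sketch of querying a Cube.js back-end over its REST API.
    import json
    import urllib.parse
    import urllib.request

    CUBEJS_URL = "http://localhost:4000/cubejs-api/v1/load"   # default dev port (assumption)
    API_TOKEN = "<your-cubejs-api-token>"

    query = {
        "measures": ["Orders.count"],
        "timeDimensions": [{"dimension": "Orders.createdAt", "granularity": "day"}],
    }

    url = CUBEJS_URL + "?query=" + urllib.parse.quote(json.dumps(query))
    request = urllib.request.Request(url, headers={"Authorization": API_TOKEN})
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    print(result["data"])   # rows ready to feed a chart on the dashboard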


Finance Remains Most Attacked Sector Globally Six of the Past Seven Years

John South of the Threat Intelligence Communication Team, Global Threat Intelligence Center at NTT Security, says: "Finance is yet again in the top spot when it comes to targeted attacks, which surely is enough evidence to convince the board that cybersecurity is a must-have investment. Many financial organizations are moving forward with digital transformation but without prioritizing security as a core business requirement. While legacy methods and tools are still effective at providing a solid foundation for mitigation, new attack methods are continually being developed by malicious actors. Security leaders should ensure basic controls remain a primary focus, but they must also embrace innovative solutions if they provide a good fit and true value." Mr. Fumitaka Takeuchi, Security Evangelist, Vice President, Managed Security Service Taskforce, Corporate Planning at NTT Communications, says: "Many organizations are caught up in simply buying solutions to problems that either don't really exist, or solutions that cost more than the potential loss being prevented."


Why Xamarin


Even after years of building for mobile, developers still heavily debate the choice of technology stack. Perhaps there isn't a silver bullet, and mobile strategy really depends on the app, developer expertise, code base maintenance and a variety of other factors. If developers want to write .NET, however, Xamarin has essentially democratized cross-platform mobile development with polished tools and service integrations. Why are we then second-guessing ourselves? Turns out, mobile development does not happen in silos, and all mobile-facing technology platforms have evolved a lot. It always makes sense to look around at what other development is happening around your app - does the chosen stack lend itself to code sharing? Are the tools of the trade welcoming to developers irrespective of their OS platform? Do the underlying pillars of the chosen technology inspire confidence? Let's take a deeper look and justify the Xamarin technology stack. Spoiler - you won't be disappointed.


LambdaTest Selenium Testing Tool Tutorial with Examples in 2019

LambdaTest Selenium Grid is a scalable, secure, and reliable cloud-based Selenium grid. It lets you perform automated cross browser testing across all major browsers and browser versions, latest and legacy, and across operating systems. It also lets you run multiple Selenium automated tests in parallel, which allows you to cut down on your build time. It also provides you with screenshots from over 2,000 mobile and desktop browsers, so you can perform visual cross browser compatibility testing; there is no need to test each browser manually, as you get full-page screenshots just by selecting the configurations. ... The thing about Selenium Grid is that it can be expensive to set up additional machines as Nodes, and this is where an online Selenium Grid (SaaS) can truly shine. Providers offer various packages, from entry-level pricing to enterprise packages, and the price for cloud solutions usually scales linearly with the number and concurrency of tests. This means you can scale according to your needs and keep the cost under control.
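
Running an existing Selenium test on a cloud grid mostly means pointing the driver at a remote hub instead of a local browser. A sketch using the standard Python bindings follows; the hub URL format and capability names are assumptions, so check the provider's documentation for the exact values:

    from selenium import webdriver

    USERNAME = "<user>"
    ACCESS_KEY = "<key>"
    # Assumed hub URL format for a cloud Selenium grid.
    HUB_URL = f"https://{USERNAME}:{ACCESS_KEY}@hub.lambdatest.com/wd/hub"

    capabilities = {
        "browserName": "Chrome",
        "version": "73.0",
        "platform": "Windows 10",
    }

    # Same WebDriver API as a local test, but the browser runs on the grid.
    driver = webdriver.Remote(command_executor=HUB_URL, desired_capabilities=capabilities)
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()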


Microservices and Distributed Transactions

Figure 2: a transaction that spans two applications and two resource managers
The usage of the two-phase commit protocol has been debated a lot since its inception. On one side, enthusiasts tried to use it in every circumstance; on the other, detractors avoided it in all situations. The first note that must be reported relates to performance: as with every consensus protocol, two-phase commit increases the time spent by a transaction. This side effect can't be avoided and must be considered at design time. It is also common knowledge that some resource managers suffer scalability limits when they manage XA transactions: this behavior depends more on the quality of the implementation than on the two-phase commit protocol itself. The abuse of two-phase commit severely hurts the performance of a distributed system, but trying to avoid it when it's the obvious solution leads to baroque and over-engineered systems that are difficult to maintain. More specifically, the integration of already existing services requires serious re-engineering when transactional behavior must be guaranteed and a consensus protocol like two-phase commit is not used.
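
The latency cost is easier to see in a bare-bones sketch of the coordinator's side of two-phase commit: every transaction needs a full round of prepare votes before anything commits, and participants hold locks in between. The Participant interface here is hypothetical; real resource managers expose the equivalent through XA:

    class Participant:
        def prepare(self) -> bool: ...   # vote yes/no and hold locks
        def commit(self) -> None: ...
        def rollback(self) -> None: ...

    def two_phase_commit(participants):
        # Phase 1: ask every participant to prepare (vote).
        votes = [p.prepare() for p in participants]
        if all(votes):
            # Phase 2a: unanimous yes -> commit everywhere.
            for p in participants:
                p.commit()
            return True
        # Phase 2b: any "no" vote (or timeout) -> roll back everywhere.
        for p in participants:
            p.rollback()
        return False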


Craft your data stores with VM storage performance in mind


One data store is almost never enough; you usually need multiple data stores, but fewer than the number of VMs you have. Modest performance VMs can share a data store; you might put six to 12 modest VMs on a single data store, but just don't put all of one kind of VM on the same data store. You'll be in a world of hurt if you keep all your Windows domain controllers on a single data store. The data store might become saturated and slow, and if you accidentally delete it, you won't have any of the necessary controllers left. Try not to place more than 12 VMs on a data store because they all share the queues and performance of the data store, and all of them will suffer if the queues become saturated. Usually, VMs share data stores with other VMs, but high-performance VMs that need multiple disks and multiple SCSI controllers also need multiple data stores. In certain cases, a single critical VM will have its own data store and, on rare occasions, one VM might need multiple data stores.


Samsung's Agile & Lean UX Journey

The greatest strength of a designer is understanding the users, argued Jo. They tried to get designers thinking about the users and to put users at the center of the team, he said. Teams should focus on the real problem for real users in a desirable and usable product. Samsung applies several user-centered practices to develop products. From the start of the project, the team creates personas together so that they can work toward one goal without pulling in different directions. Personas are connected to all of their activities. Jo mentioned that they used them in scenarios, storyboards, workflows, design reviews, and user stories. Jo explained that since their personas are added and refined based on iterative research, they become more robust and concrete with their insights. “Whenever we learn something new about users, we add or change our personas,” said Jo. He stated that "the important thing is that our personas must be alive and evolving more and more, like real characters."



Quote for the day:


"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney


Daily Tech Digest - April 10, 2019

The report notes that phishing attacks have become the most widespread email threat to organisations around the world, with attacks keeping pace with security controls and evolving to evade detection. “For most organisations, phishing is the number one email security threat, outranking both malware and ransomware,” the report said, highlighting the finding that one in every 99 emails is a phishing attack. “Cloud-based email, despite all of its benefits, has unfortunately launched a new era of phishing attacks,” said Yoav Nathaniel, lead security analyst at Avanan. “The nature of the cloud provides more vectors for hackers and gives them broader access to critical data when a phishing attack is successful. “Organisations are in desperate need for more information on phishing attacks and how to combat these attacks. We conducted this research to help inform organisations and shed light on how to keep sophisticated attacks out of their environment,” he said. In their analysis of emails sent to Office 365, Avanan researchers scanned every email after it had passed through the default security layer, enabling them to see the phishing attacks that were caught as well as those that were missed.


What is a password spraying attack and how does it work?


It can be a dictionary attack where you have these common passwords that people might use. What can also be used are credentials obtained through compromised websites because many people repeat passwords across multiple sites. Usually, it's a dictionary-type [attack], but taking passwords from sites that have been compromised is also a method that would be used. It [also] depends on how targeted the attack is. If they're going after a specific person, they might try to use all of the usernames associated with a given email and try all of the passwords that may have been taken from compromised sites. [They may] also try those usernames they have against a dictionary cyberattack [of common passwords] as well. It really depends on the motive of the hacker. It would be difficult to judge an attacker's level of sophistication based on whether they use a password spraying attack or not. You'd have to look at what other mechanisms were used as part of the broader attack. Are there other things that would occur?
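
On the defensive side, the telltale signal of spraying is one source failing logins across many distinct accounts rather than hammering a single account. A rough detection sketch, with the log field names and the threshold as assumptions:

    from collections import defaultdict

    FAILED_USER_THRESHOLD = 20   # distinct accounts per source before alerting

    def spraying_suspects(failed_login_events):
        """Group failed logins by source and flag sources touching many accounts."""
        users_per_source = defaultdict(set)
        for event in failed_login_events:        # e.g. parsed auth-log records
            users_per_source[event["source_ip"]].add(event["username"])
        return {ip: users for ip, users in users_per_source.items()
                if len(users) >= FAILED_USER_THRESHOLD}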


The Citizen’s Perspective on the Use of AI in Government


Citizens generally feel positive about government use of AI, but the level of support varies widely by use case, and many remain hesitant. Citizens expressed a positive net perception of all 13 potential use cases covered in the survey, except decision making in the justice system. (See Exhibit 1.) For example, 51% of respondents disagreed with using AI to determine innocence or guilt in a criminal trial, and 46% disagreed with its use for making parole decisions. While AI can in theory reduce subjectivity in such decisions, there are still legitimate concerns about the potential for algorithmic error or bias. Furthermore, algorithms cannot truly understand the extenuating circumstances and contextual information that many people believe should be weighed as part of these decisions. The level of support is high, however, for using AI in many core government decision-making processes, such as tax and welfare administration, fraud and noncompliance monitoring, and, to a lesser extent, immigration and visa processing. Strong support emerged for less sensitive decisions such as traffic and transport optimization.


The real challenge to achieving 5G: the networks

What are the 5G network challenges? The overriding one is producing a network core that is fully virtualized. Currently, most networks are populated with equipment that has a dedicated, single-purpose function (e.g., switch, router, NIC, RAN). This doesn’t work well when you want to be able to change and provision new services, network connections, and software solutions. The carriers have been moving towards Network Function Virtualization (NFV) for several years, but 5G has made it imperative. Why? Services such as network slicing, NB-IoT, quality-of-service offerings, intelligence at the edge, multiple radio networks/connections, etc. all require NFV. To make NFV real, operators are installing equipment that is powered not by custom fixed-function processors, but by multi-purpose programmable servers that in many ways are similar to standard application servers in use at enterprises and in the cloud. They are fully programmable and able to run applications locally, as is required for new service offerings.


Tens of thousands of cars were left exposed to thieves due to a hardcoded password


The vulnerability, tracked as CVE-2019-9493, impacts the MyCar telematics system sold by Quebec-based Automobility Distribution. ... MyCar is one of the more advanced vehicle telematics systems, providing a wealth of useful controls. According to the MyCar website, users can use the MyCar mobile apps "to pre-warm your car's cabin in the winter, pre-cool it in the summer, lock and unlock your doors, arm and disarm your vehicle's security system, open your trunk, and even find your car in a parking lot." For these reasons, the hardcoded credentials left inside the two MyCar mobile apps were a huge security flaw. According to a security alert sent out on Monday by the Carnegie Mellon University CERT Coordination Center, before the updates, any threat actor could have extracted these hardcoded credentials from the app's source code and used them "in place of a user's username and password to communicate with the server endpoint for a target user's account," granting full control over any connected car -- such as locating, unlocking, and starting it.


The role of the CIO in moving from a product company to a solutions and services provider

CIOs “have three different areas of responsibility,” according to Sanghrajka. The first surrounds how CIOs (CTOs or IT leaders) can provide technology to run operations efficiently. “That is number one,” he says. The second is about enabling technology to help businesses engage with the customer better, whether it’s employees, external customers or external partners. “How do you enable a system of engagement and create stickiness, which are factors that drive revenue growth?” asks Sanghrajka. “The first factor drives efficiency and the second factor drives revenue growth and customer loyalty,” he continues. The third concerns the actual role of the CIO. They need to become more strategically focused and play a more important role in helping their business transform from a product company to a solutions and services company. “An example of that is moving from a DVD business to a streaming service,” explains Sanghrajka. To embrace this, the role of the CIO is constantly changing.


NSS Labs CTO Jason Brvenik talks security testing challenges


There is so little transparency between what the user expects and what the product delivers, and the only way to know if something's being effective is to actually try it and the only people trying to beat defenses are the attackers right now. It's about transparency and accountability, allowing the enterprise to at least know the bounds of how much trust they should put in the capabilities being fielded, and how much opportunity they have to close that gap, and to protect their users, to protect their employees, and protect their shareholders. That's a key element -- it's necessary in the industry. It's nontrivial. It's somewhat sobering that I have a very small team that I call the 'Offensive Research' team that does the net new security testing capabilities, and we've yet to meet a product that we couldn't get past. What does that tell you? Of course, no product is perfect. We can't solve all problems in the industry. We can certainly try to make it much more difficult for somebody to steal from you and take your data.


Juniper opens SD-WAN service for the cloud

The service brings with it Juniper’s Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across NFX Series Network Services Platforms, EX Series Ethernet Switches, SRX Series next-generation firewalls, and MX Series 5G Universal Routing Platforms. Ultimately it lets customers manage and set up SD-WANs all from a single portal. The package is also a service orchestrator for the vSRX Virtual Firewall and vMX Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider Zscaler. Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery.


Recent Progress in Software Security 

Perhaps the most promising advance in software security involves using runtime controls that are embedded in the execution environment. This technique is sometimes called runtime application self-protection (RASP). Through the integration of behavioral and even machine-learning controls into and around an executable, a programmed protection environment emerges—one that can compensate for code weaknesses. RASP controls, cloud development, and DevOps are all tightly woven in most software development organizations. All three aim to increase delivered code’s speed and flexibility. However, a somewhat open question is whether these three initiatives result in more secure code. Certainly, RASP will reduce the risk of any application, good or bad, but it’s unclear whether programmers write better code in the presence of RASP. Nevertheless, runtime software controls will continue to influence software security, especially in the context of new self-learning methods. Machine-learning techniques have advanced to the point at which observed behaviors can serve as training data to label new variants of software exploits.


Attackers Shift to Malware-Based Cryptominers

"Since the browser is merely an application on a device, it cannot generate the same computing power as infecting the actual device," DeBeck writes. "As a result, this type of cryptojacking takes much longer to generate each coin, which may be incentivizing threat actors to refocus on malware infections to speed things up." Another incentive for the move to malware-based mining may be the halt to the Coinhive project. Coinhive's JavaScript code mined the privacy-focused currency monero. It frequently turned up on hacked websites because it could be incorporated by anyone into a website. The project proved controversial because hackers inserted it into websites without permission. The code was freely available to install, but Coinhive took a 30 percent share of mining rewards even if it was on a hacked site, which some maintained was unethical. "With Coinhive gone, threat actors would have to go to other script providers," DeBeck writes. "While there are many other providers of the same sort of scripts, the removal of Coinhive could affect the overall ability of the technically unskilled to create web-based cryptojacking attacks."



Quote for the day:


"New capabilities emerge just by virtue of having smart people with access to state-of-the-art technology." -- Robert E. Kahn


Daily Tech Digest - April 09, 2019

How to define load models for continuous testing


A realistic workload model is the core of a solid performance test. Generating load that does not reflect reality will only give you unrealistic feedback about the behavior of your system. That's why analyzing the traffic and application to generate your performance strategy is the most important task for creating your performance testing methodology. To help one of my clients build a realistic performance testing strategy, I built a program that extracts the use of its microservices in production. The objective was to present the 20% of calls that represent 80% of the production load. Through this extraction, the program guides the project in building a continuous testing methodology for the client's main microservices. One of the biggest limitations is the lack of information stored in the HTTP logs or the data stored in APM products. Unfortunately, there is just too much missing information to automatically generate the load testing scripts. Technically, with a tool like my prototype, you'll have everything you need to build test scripts, test definitions, and test objectives.
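
The 80/20 extraction itself is straightforward once the calls are counted: rank endpoints by volume and keep the top ones until they cover 80% of traffic. A small sketch of that idea, with the input assumed to be a list of endpoint names parsed from access logs or an APM export:

    from collections import Counter

    def top_calls_for_load_model(endpoints, coverage=0.80):
        """Return the smallest set of endpoints covering `coverage` of all calls."""
        counts = Counter(endpoints)              # endpoint -> number of calls
        total = sum(counts.values())
        selected, covered = [], 0
        for endpoint, count in counts.most_common():
            selected.append((endpoint, count))
            covered += count
            if covered / total >= coverage:
                break
        return selected

    # Example input: ["GET /cart", "GET /search", "POST /checkout", "GET /search", ...]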



Data Modeling with Indexes in RavenDB

When it comes to data modeling, indexes in relational databases usually don’t enter the equation. However, in RavenDB, indexes serve not only to enhance query performance but also to perform complex aggregations using map/reduce transformations and computations over sets of data. In other words, indexes can transform data and output documents. This means they can and should be taken into account when doing data modeling for RavenDB. Index definitions are stored within your codebase and are then deployed to the database server. This allows your index definitions to be source-controlled and live side by side with your application code. It also means indexes are tied to the version of the app that is leveraging them, making upgrades and maintenance easier. Indexes can be defined in C#/LINQ or JavaScript. For this article, we’ll use JavaScript to show off this feature of RavenDB. It’s worth noting that JavaScript index definitions currently support up to ECMAScript 5, but this will improve as the JavaScript runtime RavenDB uses adds support for ES2015 syntax in the near future.


How to Build a Culture Bridge to the Cloud


The DevOps culture necessary to effectively use open source, cloud native technologies has fundamentally changed software and team processes. It is expanding how we work and think. For some, this presents an exciting opportunity. Others approach it with more trepidation. Startups, in general, are on board. They don’t have entrenched technology that needs to be maintained and upgraded. They are also able to hire people whose skill sets are a good fit with newer technologies. For enterprises, it’s a bit tougher. They have massive investments in workhorse technologies and platforms such as Java and WebLogic. But they also have IT teams with deep heritage and operational knowledge in building, deploying, running and maintaining applications over decades. Understandably, their developers don’t necessarily want to become experts in infrastructure and in projects such as Kubernetes. They may not see the value in having novices muck around with it. As long as the developer and operations teams remain separate, they each have a measure of power and a measure of comfort.



Machines and devices are everywhere, connected—and multiplying. These are the “things” of the Internet of Things, and today there are nearly three devices attached to the internet for every human on the planet. By 2025 that ratio will soar to 10 to 1. For consumers, that means their thermostats and refrigerators can be connected to real-time, sophisticated analytics engines that automatically adjust them to be more efficient and save more money. But what does that mean for businesses? Well, just as it’s doing for consumers, IoT is helping businesses streamline operations, save money and time with real-time, actionable intelligence, and prevent problems with predictive analytics. But there’s a dark side to IoT. Frankly, it’s the concerning underbelly that exists in all connected technologies: lacking security. We already see massive DDoS attacks driven by IoT devices. Experts concede that is just the tip of the iceberg. In all, analysts project the global IoT market to exceed the $1 trillion mark in 2022. Today, companies in every industry rely on IoT as part of their business strategy. 


GDPR at a critical stage, says information commissioner


“We find ourselves at a critical stage. For me, the crucial, crucial change the law brought was around accountability. Accountability encapsulates everything the GDPR is about.” Denham said the GDPR enshrines in law an onus on companies to understand the risks that they create for others with their data processing, and to mitigate those risks. It also formalises the move away from box ticking to seeing data protection as something that is part of the cultural and business fabric of an organisation, and it reflects that people increasingly demand to be shown how their data is being used, and how it is being looked after, she added. However, she said this change is not yet evident in practice. “I don’t see it in the breaches reported to the ICO. I don’t see it in the cases we investigate, or in the audits we carry out,” she said. Denham said this is both a problem and an opportunity. “It’s a problem because accountability is a legal requirement, it’s not optional. But it is an opportunity because accountability allows data protection professionals to have a real impact on that cultural fabric of your organisation,” she said.


Gaming company boosts call center employee engagement


Many companies use design thinking to improve the customer experience. After finding it useful in the CX realm, businesses now try to apply similar approaches to improve employee engagement. Electronic Arts (EA) Inc. found this approach helpful to improve the engagement of call center employees who typically experience the brunt of customer complaints. "No one ever calls us when something good is happening," said Abby Eaton, manager of employee experience at EA. "They are calling because something has gone wrong and they are already frustrated, so the complexity of the advisers' jobs is challenging." Design thinking can help improve the design of a space, physical products and applications and has been a trend since the 1990s. Now, companies are applying this same approach to improve applications in the workplace -- cutting costs and improving worker productivity, said Parminder Jassal, Work and Learn Futures group director at the Institute for the Future, a think tank in Palo Alto, Calif.


Innovation Nation: Blockchain much bigger than Bitcoin

Transparency works well for Bitcoin's blockchain, but it might not suit, say, a large company's supply-chain system where it doesn't want suppliers and contractors to see each other's transactions. Immutability is a double-edged sword: if a fraudulent or erroneous transaction is recorded on the blockchain, there's no easy way to amend or delete it. The only way to fix that is to go back in time on the blockchain, and start again at that point to invalidate the transaction, provided everyone in the network agrees to do that. This effectively creates a new version of the software, and thus a new cryptocurrency that's not compatible with the older one. Not being able to delete or amend information could also make blockchain data stores incompatible with tightening global privacy rules that give individuals the right to "be forgotten" and have their details deleted if they so wish. Muir says we don't know the answer to that yet. Likewise, accessing blockchain data requires the use of a digital cryptographic key that has to be kept secure.


5 mistakes that doom a DevOps transformation from the start


The delivery pipeline in DevOps consists of feedback loops that allow you to inspect, reflect, and decide if you are still doing the right things in the right way. As you get better and smarter and learn more, you'll see ways to improve, to optimize, to cut out steps that are not providing value. Often those improvements require some investment and extra effort to implement. If you don't take the time to fix the pipeline when you see the ways to improve, you are just investing in a wasteful process. You are doing the process for the sake of the process, not to add the maximum value to what you are delivering. The sooner you improve, the sooner you reap the benefits of that improvement. It isn't just a matter of reviewing the process twice a year or every quarter. Continuous improvement is a cultural shift that says everyone should get better all the time. Every time you go through the process, you get a little better and learn a little more.


A Glimpse into WebAssembly


One of the biggest features WebAssembly has been touting is performance. While the overall performance is trending to be faster than JavaScript, the function-to-function comparison shows that JavaScript is still comparable in some benchmarks, so your mileage may vary. When comparing function execution time, WebAssembly is predicted to be about 20-30% faster than JavaScript, which is not as much as it sounds since JavaScript is heavily optimized. At this time, the function performance of WebAssembly is roughly about the same or even a little worse than JavaScript — which has deflated my hopes in this arena. Since WebAssembly is a relatively new technology, there are probably a few security exploits waiting to be found. For example, there are already some articles around exploiting type checking and control flow within WebAssembly. Also, since WebAssembly runs in a sandbox, it was susceptible to Spectre and Meltdown CPU exploits, but it was mitigated by some browser patches. Going forward, there will be new exploits. Also, if you are supporting enterprise clients using IE or other older browsers, then you should lean away from WebAssembly.


Is Hadoop’s legacy in the cloud?

What many people failed to realise is that Hadoop itself is more of a framework than a big data solution. Plus, with its broad ecosystem of complementary open source projects, Hadoop was too complicated for most businesses. It needed a level of configuration and programming knowledge that could only be supplied by a dedicated team to fully leverage it. Even when there was a dedicated internal team, it sometimes needed something extra. For instance, one of Exasol’s clients, King Digital Entertainment, makers of the Candy Crush series of games, couldn’t get the most out of Hadoop. It wasn’t quick enough for the interactive BI queries that the internal data science team demanded. They needed an accelerator on a multi-petabyte Hadoop cluster which allowed their data scientists to interactively query the data. The world of data warehousing has changed in recent years, and Hadoop has had to adapt. The IT infrastructure of 2009-2013, when Hadoop was at the peak of its fame, differs greatly from the IT infrastructure of today.



Quote for the day:


"Leaders need to strike a balance between action and patience." -- Doug Smith


Daily Tech Digest - April 08, 2019

Node.js vs. Java: An epic battle for developer mind share

For all its success, though, Java never established much traction on the desktop or in the browser. People touted the power of applets and Java-based tools, but gunk always glitched up these combinations. Servers became Java’s sweet spot. Meanwhile, what programmers initially mistook as the dumb twin has come into its own. Sure, JavaScript tagged along for a few years as HTML and the web pulled a Borg on the world. But that changed with AJAX. Suddenly, the dumb twin had power. Then Node.js was spawned, turning developers’ heads with its speed. Not only was JavaScript faster on the server than anyone had expected, but it was often faster than Java and other options. Its steady diet of small, quick, endless requests for data has since made Node.js more common, as webpages have grown more dynamic. While it may have been unthinkable 20 years ago, the quasi-twins are now locked in a battle for control of the programming world. On one side are the deep foundations of solid engineering and architecture. On the other side are simplicity and ubiquity.



Why you need to align your cloud strategy to business goals

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model, in turn, decides how their security efforts contribute to the overall risk reduction and better security posture. This means setting a baseline of security controls, communicating this to all business units, and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption. When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by gap assessment, understanding potential inconsistencies with the baseline framework.


Gamification: Understanding The Basics


Games make us happy because they are hard work that we choose for ourselves. And it turns out that nothing makes us happier than good, hard work. We don’t normally think of games as hard work. After all, we play games, and we have been taught to think of play as the opposite of work. But nothing could be further from the truth. ... A game is an opportunity to focus our energy on something better. On something that will make us better. On something we are good at, or getting better at, and enjoy. As mentioned above, gameplay is the opposite of depression. And that’s why so many games are addictive: because they are able to boost our positive thinking that we are capable of doing and achieving something. When we’re in a state of optimistic engagement, it suddenly becomes biologically more possible for us to think positive thoughts. ... real-world hard work isn’t hard enough. You read it right. We become bored and feel underutilised. And this happens specifically in bigger companies, where you feel that you don’t make a big impact by doing your small piece of work. This relates to one of the steps in Maslow’s hierarchy — feeling appreciated for what you do.


Critical infrastructure under relentless cyber attack

“Nation-state attacks are especially concerning in the OT sector because they’re typically conducted by well-funded, highly capable cyber criminals and are aimed at critical infrastructure,” the report said. The report is based on the analyses of responses from 701 representatives of the US, UK, Germany, Australia, Mexico and Japan working in industries that rely on industrial control systems (ICS) and other forms of OT. The report revealed that cyber attacks are relentless and continuous against OT environments. Most organisations in the OT sector have experienced multiple cyber attacks causing data breaches and/or significant disruption and downtime to business operations, plants and operational equipment, with many being hit by nation-state attacks, the report said. The findings showed cyber attacks are having an effect on physical systems, according to Eitan Goldstein, senior director, strategic initiatives at Tenable. “That is a really big change and that’s why the risk isn’t just theoretical anymore,” he told the BBC.


5 Cybersecurity Myths Banks Should Stop Believing

There is a belief among many senior execs that appointing a C-level exec to oversee a problem or challenge will take care of it or make it go away. If you need proof, consider how many companies now have a Chief analytics, AI, brand, customer, data, digital, experience, knowledge...you don't really want me to go on, do you...Officer. I'm all for a Chief Information Security Officer (CISO), but many business execs think that, by having one, that person (and IT) has the cybersecurity efforts under control. It doesn't work that way. The CISO of a $3 billion bank told me: “I may be responsible for the security of the bank’s information, but it’s the executive team and functional heads who must ensure that we manage and mitigate the day-to-day operational risks of cybersecurity efficiently and effectively.” Data breaches and cyberattacks affect the entire enterprise, not just a single unit, division, or department. Decisions to mitigate these threats shouldn’t be relegated to IT. In addition, cyberincidents require communications with the institution’s customers, employees, partners, and media. The executive team and board should help script the organization's responses.


True Cybersecurity Means a Proactive Response

New cyber threats are emerging regularly and the solution to them lies in an aggressive, pre-emptive, proactive posture. Successful and secure organizations must begin to think this way if they want true data security. To do this, organizations must pivot in their security mindset and begin to implement solutions that take a comprehensive look and map all legitimate executions of an application based on the code written by its creators, such as Microsoft and Adobe. With that map, they can identify any inconsistencies or deviation from their source code. Recognized patterns and actions can then be confirmed in real time, while unidentified activities are reviewed and blocked instantaneously. A proactive approach is a critical mindset change and an imperative if companies want to ensure they are in control of their network security. If organizations remain reactive, they will continue to consume valuable resources and risk their reputations as they chase after and remediate the mess left after the cyberattack has happened.
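
One very simplified flavor of that mapping idea is an allow-list of known-good fingerprints, where anything that does not match the map is flagged instead of trusted by default. This sketch only checks file hashes and ignores runtime behavior; the digest value is a placeholder:

    import hashlib

    KNOWN_GOOD_HASHES = {
        # SHA-256 digests of binaries from the vendor's legitimate builds (placeholder).
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def is_recognized(path):
        """Return True only if the file matches a mapped, legitimate build."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_GOOD_HASHES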


Migrating a Retail Monolith to Microservices

Architectural and organisational model
The autonomy principles include that a team can work and deploy independently; they should never have to wait for, or synchronize with, another team. Implementation details should be hidden from other teams, and failures isolated within services to make them resilient. The principles also state that for each data store there must be exactly one responsible service. The first team rule concerning automation is that scaling must be horizontal and done automatically. Teams should also embrace a culture of automation, automating testing, deployment and operations as much as possible. They are encouraged to deploy to production early and often, but also to be able to quickly roll back in case of errors. To enable this, services must be highly observable. For all teams, communication is standardized and asynchronous where possible. For synchronous communication they use REST (maturity level 2, without hypermedia), and Kafka for asynchronous communication.


Performance-Based Routing (PBR) – The gold rush for SD-WAN

The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path. Originally, we didn't have real-time traffic, such as voice and video, which is latency- and jitter-sensitive. Besides, we also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true. To overcome the drawbacks of traditional routing, we have had the onset of new protocols, such as IPv6 segment routing and named data networking, along with specific SD-WAN vendor mechanisms that improve routing. For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation which could be a combination of GRE, UDP, Ethernet, MPLS, VxLAN and IPsec. IPv6 segment routing implements a stack of segments inserted in every packet and the named data networking can be distributed with routing protocols.


5 reasons CIO career paths go south -- and how to protect yourself


Commerce in the information age has introduced a multitude of regulations that can threaten a CIO career path. Whether it's the Sarbanes-Oxley Act, which ensures the accuracy of financial reporting, or GDPR, which protects consumer data, businesses face a plethora of regulatory requirements that inevitably require IT systems to manage. In some industries, the number and diversity of regulatory mandates has been known to cause compliance fatigue, where people start getting sloppy about compliance as the volume of requirements increases. Compliance failures can not only result in a CIO's dismissal, but they can also cause enterprise-threatening damage due to big fines, lawsuits and even criminal prosecution. Just as damaging are failures in governance, where there are no systems in place to track and enforce a company's internal policies. A perfect example is the public embarrassment Facebook had to deal with during the 2018 Cambridge Analytica scandal.


Information Architects: Artificial Intelligence's Best Friend

Consider how behind the curtains of brilliant AI sit astounding designs that pave the way for instantaneous data retrieval. These pathways and storage units, though each initially the property of unique teams and business units, are integrated into a holistic framework by the efforts of Information Architects—the unsung heroes of AI—to create an enterprise-wide repository of knowledge to link departments and applications and just about anything else with clues into user behaviour. But no matter the data source, IAs must first groom input channels fed to AI systems in order to spotlight worthy patterns of interest. Everything is given an attribute and a value, and while not all data points will even contribute to an overall AI analysis, knowledge across an enterprise must nonetheless be put within accessible structures to help a system draw its own conclusions. IAs curate data according to real business needs to achieve specific, strategic solutions—and they use AI to adroitly connect the results of intelligence gathering.



Quote for the day:


"There are some among the so-called elite who are overbearing and arrogant. I want to foster leaders, not elitists." -- Daisaku Ikeda


Daily Tech Digest - April 07, 2019

Can you teach humor to an AI?


“Artificial intelligence will never get jokes like humans do,” states Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. “In themselves, they have no need for humor. They miss completely context,” he adds. Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, elaborates on the complexity for machines to process context: “Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp.” Miller has analyzed more than 10,000 word plays and found it quite challenging. “It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from,” he concludes.



Security flaws in banking apps expose data and source code


Exposed source code, sensitive data, access to backend services via APIs and more have been uncovered after a researcher downloaded various financial apps from the Google Play store and found that it took, on average, just eight and a half minutes before they were reading the code. Vulnerabilities including lack of binary protections, insecure data storage, unintended data leakage, weak encryption and more were found in banking, credit card and mobile payments apps and are detailed in a report by cybersecurity company Arxan: In Plain Sight: The Vulnerability Epidemic in Financial Mobile Apps. "There's clearly a systemic issue here – it's not just one company, it's 30 companies and it's across multiple financial services verticals," Alissa Knight, cybersecurity analyst at global research and advisory firm Aite Group and the researcher behind the study, told ZDNet. The vast majority – 97 percent of the apps tested – were found to lack binary code protections, making it possible to reverse engineer or decompile the apps exposing source code to analysis and tampering.


Why blockchain (might be) coming to an IoT implementation near you

Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes - which can be largely anything with a CPU in them - communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain. The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri. That’s a powerful security technique - absent a bad actor successfully controlling all of the nodes on a given blockchain, the data protected by that blockchain can’t be falsified or otherwise fiddled with.
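
The hash-linking part is small enough to sketch directly: each block stores the hash of the previous block, so editing any earlier record breaks every link after it. This toy version ignores consensus, proof-of-work and networking entirely:

    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, data):
        previous_hash = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"data": data, "prev": previous_hash})

    def chain_is_valid(chain):
        return all(chain[i]["prev"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    append_block(chain, {"from": "A", "to": "B", "amount": 10})
    append_block(chain, {"from": "B", "to": "C", "amount": 4})
    print(chain_is_valid(chain))        # True
    chain[0]["data"]["amount"] = 1000   # tamper with an old transaction
    print(chain_is_valid(chain))        # False -- the links no longer match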


Researchers developed algorithms that mimic the human brain (and the results don’t suck)

Krotov and Hopfield’s work maintains the simplicity of the old school studies, but represents a novel step forward in brain-emulating neural networks. TNW spoke with Krotov who told us: If we talk about real neurobiology, there are many important details of how it works: complicated biophysical mechanisms of neurotransmitter dynamics at synaptic junctions, existence of more than one type of cells, details of spiking activities of those cells, etc. In our work, we ignore most of these details. Instead, we adopt one principle that is known to exist in the biological neural networks: the idea of locality. Neurons interact with each other only in pairs. In other words, our model is not an implementation of real biology, and in fact it is very far from the real biology, but rather it is a mathematical abstraction of biology to a single mathematical concept – locality. Modern deep learning methods often rely on a training technique called backpropagation, something that simply wouldn’t work in the human brain because it relies on non-local data.
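
To make the locality idea concrete, here is a plain Hebbian-style update in which each weight changes using only the activity of the two units it connects, with no error signal propagated back from later layers. This is a generic illustration of a local rule, not the specific learning rule from Krotov and Hopfield's paper:

    import numpy as np

    np.random.seed(0)
    x = np.random.rand(100)               # pre-synaptic activity (input)
    W = np.random.randn(10, 100) * 0.01   # weights from input to 10 hidden units
    learning_rate = 0.01

    y = W @ x                             # post-synaptic activity
    # Local update: each W[i, j] changes based only on (y[i], x[j]).
    W += learning_rate * np.outer(y, x)

    # Backpropagation, by contrast, would update W using errors computed in
    # downstream layers -- information a biological synapse has no access to.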


Self-Service Delivery


Self-Service Delivery is an approach that makes the tools necessary to develop and deliver applications available via self-service. It makes the actions we need to take as developers — starting, developing and shipping software — available as user-accessible tools, so that we can work at our own speed without getting blocked. By making actions automated and accessible, it's easier to standardize configurations and practices across teams. We need specific building blocks to enable Self-Service Delivery. The same principle at the heart of your favorite framework applies to delivery. If we think of delivery phases in framework terms, each phase has a default implementation, which can be overridden. For example, if the convention is that Node projects in my team are built by running npm test, then I include a test script in my project. I don't write the code that runs the script, nor tell my build tool explicitly to do so. The same is true for other phases of delivery.


Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google's Bach composer made some mistakes an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths, a music interval that Bach studiously avoided. The app also broke musical rules of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI's text-generating program occasionally wrote phrases like "fires happening under water" that made no sense in their contexts. As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine. Giving researchers and companies freedom to explore, in order to seek these positive achievements from AI systems, means opening up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress.


The Race For Data And The Cybersecurity Challenges This Creates

High-tech today needs to be doing the exact same thing, with an emphasis on cybersecurity problems. Rather than sending devices and apps into the connected ecosystem willy-nilly, we need to fully understand what could happen when we do. How many people could be impacted? How many companies? What are the financial losses that could be sustained? What about losses to brand/image? In other words: do we really understand the implications of what we are creating here? These questions, if well researched, should be enough to slow down time-to-market and eventually stop breaking so many things. This should be performed both at the development stage in every company and the adoption stage. Companies creating products have a responsibility to their customers to ensure safety and they can’t do that if they don’t fully take everything into account. On the other end of the spectrum, CIOs, CTOs, and anyone responsible for buying and adopting new tech in your business needs to perform the same sort of analysis. Don’t just buy tech for tech’s sake.


Serverless computing growth softens, at least for now

Plans or intentions for serverless implementations have slipped as well, the Cloud Foundry survey also shows. Currently, 36 percent report evaluating serverless, compared to 42 percent in the previous survey.  Some of this may be attributable to the statistical aberrations that occur within surveys that are conducted within months of one another -- don't be surprised if the numbers pop again in the fall survey. Diving deeper into the adoption and planned adoption numbers, the survey's authors point out that within organizations embracing serverless architecture, usage is actually proliferating. For users and evaluators, 18 percent say they are broadly deploying serverless across their entire company, double the percentage (9 percent) who said that only one year ago.  Still, it is telling that there is some degree of caution being exercised when moving to serverless architecture. What's behind the caution?


Vulnerability Management: 3 Questions That Will Help Prioritize Patching


There is usually a significant delta between intended network segmentation and access rights, and what actually exists. Credentials and connections that introduce risk get set up in a variety of ways. We call this actual connectivity the “access footprint.” Throughout the normal work day, users connect and disconnect from various systems and applications, leaving behind cached credentials and potential “live” connections. The access footprint changes constantly. Some risky conditions are fleeting; others can persist for a very long time. But even if these conditions are short-lived, an attacker situated in the right place at the right time (“right” for them, wrong for you!) has plenty to work with. A new report published by CrowdStrike underscores the importance of proactively hardening the network against lateral movement. It’s a vitally important complement to traditional vulnerability management.


The Difference Between Microservices and Web Services


Microservices architecture involves breaking down a software application into its smaller components, rather than just having one large software application. Typically, this involves splitting up a software application into smaller, distinct business capabilities. These can then talk to each other via an interface. ... So, if microservices are like mini-applications that can talk to each other, then what are web services? Well, they are also mini-applications that can talk to each other, but over a network, in a defined format. They allow one piece of software to get input from another piece of software, or provide output, over a network. This is performed via a defined interface and language, such as XML. If you’re running on a network where your software components or services won’t be co-located, or you want the option of running them in separate locations in the future, then you will likely need to use web services in some form.
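
A toy example makes the distinction tangible: the "web service" part is simply a defined interface reachable over the network, which any other component can consume regardless of where it runs. The endpoint name and payload shape below are made up for illustration:

    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PriceService(BaseHTTPRequestHandler):
        # One mini-application exposing a tiny JSON-over-HTTP interface.
        def do_GET(self):
            body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    server = HTTPServer(("localhost", 8080), PriceService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # A separate component consumes the interface over the network.
    with urllib.request.urlopen("http://localhost:8080/ABC-123") as response:
        print(json.loads(response.read()))   # {'sku': 'ABC-123', 'price': 9.99}
    server.shutdown()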



Quote for the day:


"No amount of learning can cure stupidity and formal education positively fortifies it." -- Stephen Vizinczey


Daily Tech Digest - April 06, 2019

Artificial intelligence, machine learning and intelligence


Besides processing information in the “classic” way, quantum computers exploit two specific characteristics of quantum systems: superposition, where two or more quantum states can be added together, and entanglement, which counter-intuitively implies correlations among physically remote quantum states. The result is data capacity and calculation speed that enable previously unimaginable operations: the analysis of continental climate change; the world economic cycles of raw materials; the number and physical constants of galaxies in space. In the future, there will also be convergence between AI and the Internet of Things, which will make both the construction and the driving of vehicles autonomous. Another short-term integration will be between blockchain technology and Artificial Intelligence. We have often spoken about blockchain, but in this case the key is above all the integration between the “closed” blockchain network and selective data collection or, alternatively, a patented and still-secret technology.



Continuous Delivery Foundation seeks smoother CI/CD paths


One reason is that enterprises must pick from a large menu of often-fragmented tools in the CI/CD market and then integrate those tools into their CI/CD pipelines. Among the many tools in the CI/CD landscape are Shippable, CloudBees Jenkins, Atlassian's Bamboo, Bitnami, CircleCI, Travis CI, JetBrains' TeamCity and Microsoft's Azure DevOps Server. Nearly every company also creates software to automate its business processes, so CI/CD tools are in higher demand than ever. Despite some consolidation in the DevOps arena -- JFrog recently acquired Shippable, and CloudBees snapped up Codeship -- enterprises do face a choice: They must integrate several different tools to build their pipelines, or lock into an end-to-end DevOps tool environment with one of the major cloud providers. To help simplify the process, the Linux Foundation formed the Continuous Delivery Foundation (CDF) in mid-March. Among the CDF's founding members, which span open source software, platforms and tools, are the following: Alibaba, Autodesk, Capital One, CircleCI, CloudBees, GitLab, Google, Huawei, IBM, JFrog, Netflix, Puppet, Red Hat and SAP.



The Best Decision: Your Future and Serverless Stream Processing

A streaming data processing structure usually comprises two layers: a storage layer and a processing layer. The former is responsible for ordering large streams of records and facilitating persistence and accessibility at high speeds. The processing layer takes care of data consumption, executing computations, and notifying the storage layer to get rid of already-processed records. Data processing is done for each record incrementally or by matching over sliding time windows. Processed data is then subjected to streaming analytics operations, and the derived information is used to make context-based decisions. For instance, companies can track shifts in public sentiment about their products by analyzing social media streams continuously; the world's most influential nations can intervene in decisive events like presidential elections in other powerful countries; and mobile apps can offer personalized product recommendations based on device geolocation and user emotions.
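
As a minimal sketch of the per-record, sliding-window processing described above (the record shape, the key names, and the 60-second window are assumptions), incremental processing can be as simple as updating counts as each record arrives and expiring records that fall out of the window:

    # Illustrative sliding-window counter: each incoming record is processed
    # incrementally, and records older than the window are discarded.
    import time
    from collections import deque

    class SlidingWindowCounter:
        """Counts events per key over the last `window_seconds` seconds."""

        def __init__(self, window_seconds=60):
            self.window_seconds = window_seconds
            self.events = deque()   # (timestamp, key) pairs, oldest first
            self.counts = {}        # key -> count inside the current window

        def process(self, record):
            now = time.time()
            key = record["key"]
            self.events.append((now, key))
            self.counts[key] = self.counts.get(key, 0) + 1
            self._expire(now)
            return self.counts.get(key, 0)  # current count for this key

        def _expire(self, now):
            # Drop records that have slid out of the time window.
            while self.events and now - self.events[0][0] > self.window_seconds:
                _, old_key = self.events.popleft()
                self.counts[old_key] -= 1
                if self.counts[old_key] == 0:
                    del self.counts[old_key]

    # Usage: feed each record as it arrives from the stream.
    counter = SlidingWindowCounter(window_seconds=60)
    counter.process({"key": "checkout_failed"})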


10 Interesting Facts About Chatbots


People don’t really care if your chatbot has a great personality. Especially if that chatbot can’t solve an issue that one of your customers is currently experiencing. Make sure you focus on utility over personality. Forty-eight percent of respondents in the same LivePerson survey said they prefer chatbots that can solve problems. However, don’t forget about speed! Consumers value friendliness and ease of use the most in chatbots, but speed is a close third, according to Aspect’s research. Speed is more important to consumers than having a successful interaction and even accuracy. ... Facebook has evolved a ton over the years. It’s no longer just a place to keep in touch with friends from school and spy on your ex. Now it’s also a place to buy things. In fact, a new model is taking shape — one where people don’t have to click on a link, leave Facebook to visit a traditional company website, add stuff to a shopping cart, and complete a purchase. That’s because 37 percent of people are open to the idea of buying items on the social network, according to the same HubSpot research. And you can be sure that number will continue to grow as more people are exposed to and adopt chatbots.


5G & Industry 4.0 at Hannover Messe 2019

Woman wearing AR lenses interacting with a robotic arm.
Ericsson sees mobile technology as a new foundation to accelerate and support these new technologies. If factories are to enable digital twins of all processes and workflows, reliable wireless capabilities and low latency are a necessity. With 5G, digital twins can be accessed through remote VR monitoring, supporting transparency in factories. For example, one of our interactive proof points is a virtual tour of the FCA Mirafiori plant in Torino, where the visitor can “move” within it, monitoring key processes for bottlenecks and machinery parameters like vibration and temperature. We also address the challenges of companies with distributed production sites. A common problem is that similar processes perform differently at different locations. To solve it, plants must break down silos and introduce transparency to optimize and align these processes. With Fraunhofer IPT, Ericsson presents the 5G Production Cockpit, which gives a real-time view of processes in Aachen as well as Stockholm, transmitting live data and creating digital twins. With centralized data and analytics, current as well as historical data can be compared for deeper insights.


Form a hybrid integration plan for your architecture


The first challenge is to figure out exactly what your requirements are as an architect. You can have a narrow perspective and focus on hybrid integration in the context of a particular project or initiative, or you can have a holistic perspective. And if you have a holistic perspective, it's hard work to figure out exactly what your integration requirements are today and what they will be in the next, let's say, three to five years, because of all these things happening. The second is selecting the appropriate combination of technologies. Architects would love to have one single [hybrid integration] platform that can cover them all, which can connect IoT devices, mobile devices, APIs, cloud, etc. This is difficult. In the market, there are many [hybrid integration] products, but few are good at supporting all these different scenarios. So, identify what is the right combination of technologies that can be used to solve the problem. ... Sometimes, you cannot put the same platform in the three environments. Maybe on-premises, you have more demanding requirements than in the cloud.


Domain-Oriented Observability

"Observability" has a broad scope, from low-level technical metrics through to high-level business key performance indicators (KPIs). On the technical end of the spectrum, we can track things like memory and CPU utilization, network and disk I/O, thread counts, and garbage collection (GC) pauses. On the other end of the spectrum, our business/domain metrics might track things like cart abandonment rate, session duration, or payment failure rate. Because these higher-level metrics are specific to each system, they usually require hand-rolled instrumentation logic. This is in contrast to lower-level technical instrumentation, which is more generic and often is achieved without much modification to a system's codebase beyond perhaps injecting some sort of monitoring agent at boot time. It's also important to note that higher-level, product-oriented metrics are more valuable because, by definition, they more closely reflect that the system is performing toward its intended business goals. By adding instrumentation that tracks these valuable metrics we achieve Domain-Oriented Observability.


Why Cybersecurity Matters: A Lawyer’s Toolkit


The stark reality is that most attorneys are highly independent and singularly focused on servicing their clients, whether as in-house or outside counsel. The extra steps required to access files and applications with oft hard-to-remember (but more secure) passwords are not always congruous with billable hours and around-the-clock attention to deliverables. Lawyers may compromise on security to ensure direct communications with clients on their platform of choice, in pursuit of the almighty billable hour. Another vulnerability is that attorneys crave information, the more the better. This trait is something that savvy hackers understand and will use to their advantage. Email phishing, in that regard, is a frequent tactic. The smart cyber-villain will quickly learn how to dupe attorneys and their assistants by sending attachments and links by email that appear to come from a legitimate source. Once said attachment is opened — bingo! — the malware starts to execute and do the dirty work behind the scenes, scouring the device for desired data points and eventually securing access to an internal network.


Does IT need Devops Managers?


If you read Agile literature, you’ll realize that the reason Agile in general and Scrum in particular promote the role of a Scrum “Master” rather than a Manager is that the latter often oversteps his authority and mandate. The effect on the experts is disastrous. They feel ‘controlled’, with no motivation left to appreciate the overall objective of the venture they’re part of, and confine themselves to doing just what they’re asked to do. This is the beginning of the ‘silo’ mentality: the very mentality DevOps is supposed to eliminate. If you look deeper to understand the silo between Dev and Ops, you’ll observe that the silo gets deeper as you go lower in the hierarchy; it’s not so deep at the Dev and Ops management layer. So, the point is that IT management needs to look at itself in the mirror and assess the degree to which it has contributed to this chasm between Dev and Ops. Managers need to step back from their command-and-control approach toward a far more ‘watch from a distance and protect’ approach. They need to empower the ‘experts’, allow them to mingle, interact and collaborate, show them the big picture, assert confidence in them to solve the big problems and create a win-win platform.


When should I choose between serverless and microservices?


Microservices are best suited for long-running, complex applications that have significant resource and management requirements. You can migrate an existing monolithic application to microservices, which makes it easier to modularly develop features for the application and deploy it in the cloud. Microservices are also a good choice for building e-commerce sites, as they can retain information throughout a transaction and meet the needs of a 24/7 customer base. On the other hand, serverless functions only execute when needed. Once the execution is over, the computing instance that runs the code decommissions itself. Serverless aligns with applications that are event driven, especially when the events are sporadic and the event processing is not resource-intensive. Serverless is a good choice when developers need to deploy fast and there are minimal application scaling concerns. ... As a rule of thumb, choose serverless computing when you need automatic scaling and lower runtime costs, and choose microservices when you need flexibility and want to migrate a legacy application to a modern architecture.
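
To illustrate the "executes only when needed" model (written in the style of an AWS Lambda handler in Python; the event shape and the thumbnail use case are assumptions for the example), a serverless function simply reacts to an incoming event and then hands the compute instance back to the platform:

    # Illustrative event-driven serverless function. It runs only when an event
    # (here, an object-storage notification) arrives; afterwards the platform
    # reclaims the compute instance.
    import json

    def handler(event, context):
        records = event.get("Records", [])
        processed = []
        for record in records:
            object_key = record.get("s3", {}).get("object", {}).get("key")
            if object_key:
                # ... e.g. generate a thumbnail or update a search index ...
                processed.append(object_key)
        return {
            "statusCode": 200,
            "body": json.dumps({"processed": processed}),
        }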



Quote for the day:


"Learn from the mistakes of others. You can never live long enough to make them all yourself." -- Groucho Marx