Daily Tech Digest - April 21, 2019

Blockchain: The Ultimate Disruptor?


What the internet did for the exchange of information, blockchain has the potential to do for the exchange of a digital asset’s value. Right now, many people in the blockchain space are talking about “tokenization,” which breaks down the ownership of an asset into digital tokens to allow wider-scale ownership of that asset. Tokenization started with initial coin offerings (ICOs) and has evolved to securitized token offerings (STOs), which have the potential to unlock the value of trillions of dollars of assets that are currently closed to the average person and make them more accessible. We’re talking about real-estate holdings, private equity, etc. When these assets are tokenized and brought into the market, it could impact the average person and how they do their financial and retirement planning, as well as where and what they choose to invest in. ... Rafia says about the wider access potential, “Currently, private equity, venture capital and other similar investments are not available to retail investors because there are a lot of regulations preventing it, as they tend to be riskier asset classes ...”


Key to changing your enterprise is analyzing the impact of changes and planning those changes in a smart way. We do not advocate a ‘big up-front design’ approach, with huge, rigid multi-year transformation plans. Rather, in an increasingly volatile business world you need to use an iterative approach where your plans are updated regularly to match changing circumstances, typically in an agile manner. The figure below shows a simple example of dependencies between a series of changes, depicted with the pink boxes. A delay in ‘P428’ causes problems in the schedule, since ‘P472’ depends on it. Moreover, since the two changes overlap in scope (shown in the right-hand table), they could potentially be in each other’s way when they also overlap in time. This information is calculated from the combination of project schedule and architecture information, a clear example of the value of integrating this kind of structure and data in a Digital Twin.
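The kind of conflict check described above, flagging a pair of changes that depend on each other and also overlap in both time and scope, can be sketched in a few lines. All dates and scope values below are invented for illustration; only the ‘P428’/‘P472’ dependency comes from the text:

```python
from datetime import date

# Hypothetical change records: a schedule window plus the affected scope items.
changes = {
    "P428": {"start": date(2019, 1, 1), "end": date(2019, 6, 30),
             "scope": {"CRM", "Billing"}},
    "P472": {"start": date(2019, 5, 1), "end": date(2019, 9, 30),
             "scope": {"Billing", "Data Warehouse"}},
}
depends_on = {"P472": ["P428"]}  # P472 depends on P428

def overlaps(a, b):
    """True when two changes overlap both in time and in scope."""
    time_overlap = a["start"] <= b["end"] and b["start"] <= a["end"]
    scope_overlap = bool(a["scope"] & b["scope"])
    return time_overlap and scope_overlap

def conflicts(changes, depends_on):
    """Flag dependent change pairs that also collide in time and scope."""
    found = []
    for child, parents in depends_on.items():
        for parent in parents:
            if overlaps(changes[child], changes[parent]):
                found.append((parent, child))
    return found
```

A real Digital Twin would pull the schedule windows from project-management data and the scope sets from architecture models; the join logic itself stays this simple.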


How People Are the Real Key to Digital Transformation

An interview with Gerald C. Kane, Anh Nguyen Phillips, Jonathan R. Copulsky, and Garth R. Andrus, the authors of "The Technology Fallacy."
Digital disruption affects all levels of the organization. Our research shows, however, that higher-level leaders are generally much more optimistic about how their organization is adapting to that disruption than lower-level employees. This result suggests that leaders may be overestimating how well their organization is responding. In the book, we provide a framework by which leaders can survey their employees to gauge how digitally mature their organization is against 23 traits, which we refer to as the organization’s digital DNA. Digital maturity is usually unevenly distributed throughout an organization, and we encourage organizations to use this framework to assess how it is distributed so they can begin to identify and address the areas of improvement that are most likely to yield organizational benefits. ... a single set of organizational characteristics was essential for digital maturity -- accepting risk of failure as a natural part of experimenting with new initiatives, actively implementing initiatives to increase agility, valuing and encouraging experimentation and testing as a means of continuous organizational learning, recognizing and rewarding collaboration across teams and divisions, increasingly organizing around cross-functional project teams, and empowering those teams to act autonomously.


Cachalot DB as a Distributed Cache with Unique Features

The most frequent use-case for a distributed cache is to store objects identified by one or more unique keys. A database contains the persistent data and, when an object is accessed, we first try to get it from the cache and, if not available, load it from the database. Usually, if the object is loaded from the database, it is also stored in the cache for later use. ... By using this simple algorithm, the cache is progressively filled with data and its “hit ratio” improves over time. This cache usage is usually associated with an “eviction policy” to avoid excessive memory consumption. When a threshold is reached (either in terms of memory usage or object count), some of the objects from the cache are removed. The most frequently used eviction policy is “Least Recently Used”, abbreviated LRU. In this case, every time an object is accessed in the cache, its associated timestamp is updated. When eviction is triggered, we remove the objects with the oldest timestamp. Using cachalot as a distributed cache of this type is very easy.
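The cache-aside read path and LRU eviction described above can be sketched as follows; `LRUCache` and `read_through` are illustrative names for the pattern, not Cachalot's actual API:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: reads refresh recency; eviction drops the oldest entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)        # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry

def read_through(key, cache, load_from_db):
    """Cache-aside read: try the cache first, fall back to the database."""
    value = cache.get(key)
    if value is None:
        value = load_from_db(key)
        cache.put(key, value)              # populate the cache for later use
    return value
```

As the text notes, the hit ratio improves as the cache fills: repeated reads of the same key never touch the database until eviction removes the entry.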


Enterprise Architecture: A Blueprint for Digital Transformation


Enterprise architects have a tough job. They have to think strategically but act tactically. A successful enterprise architect can sit down at the boardroom table and discuss where the business needs to go, then translate “business speak” into technical capabilities on the back end. The key to EA is to always focus on business needs first, then how those needs can be met by applying technology. It comes down to the concept of IQ (intelligence quotient or “raw intelligence”) and EQ (emotional quotient, or “emotional intelligence”). As a recent Forbes article stated, “when it comes to success in business, EQ eats IQ for breakfast.” Good enterprise architects need to have good IQ and EQ. This balance prevents pursuing the latest technology just because it’s cool but instead determining what’s the best way to meet the business need. At the end of the day, an EA should be measured by the business outcomes it’s delivering. Our approach to EA (see below) starts with a business outcome statement, and ends with governance processes to verify we’re achieving those business outcomes and adhering to the EA blueprint.


Crisis Resilience for the Board


Similar to culture oversight, boards are increasingly monitoring company technology activities, from cyber risk to disruption risk to digital transformation. Directors are asking management tough questions about technologies that are vital to the business and whether they are truly protected from the most likely and impactful risks. Beyond protecting data, the board should understand whether management is incorporating resilience into their information technology and cybersecurity strategies. To do so, directors may seek to understand how the most critical data—or that which is most vital to the business’s success—is backed up and protected, both physically and logically. Directors should understand, at a high level, what the most critical data asset sets or capabilities are to the company and the risks posed to them. Additionally, directors should ask management whether it is considering innovative technologies to both protect assets and enable quick recovery in the event of potential loss. ... Directors might also endeavor to learn about leading practices around risk management, crisis management, cyber risk, physical security, succession planning, and culture risk. This could provide a level of comfort with the risks posed to the company, as well as a degree of confidence in the company’s ability to respond.


The Cybersecurity 202: This is the biggest problem with cybersecurity research


“There are a whole lot of possible barriers that will come to the fore if an organization asks their lawyers about it,” Moore said. “It turns out that many of those risks, on deeper inspection, can be mitigated and overcome. But there has to be institutional will to do it.” One irony of this problem is that the cybersecurity community has been hyper-focused on information sharing in recent years — but the focus has been on companies sharing hacking threats from the past day or two so they can guard against them. The government has championed these threat-sharing operations and facilitates them through a set of organizations called information sharing and analysis centers and information sharing and analysis organizations. That sort of sharing has a clear benefit for companies because it helps them defend against threats that may be coming in the next hour or day. But companies have made less progress on sharing longer-range cybersecurity information that can help address more fundamental cybersecurity challenges, Moore said.


The Connection Between Strategy And Enterprise Architecture (Part 3)

Business capabilities connect the business model with the enterprise architecture, which is composed of the organizational structure, processes, and resources that execute the business model. It is a combination of resources, processes, values, technology solutions, and other assets that are used to implement the strategy. ... business capabilities comprise a fundamental building block that enables and supports the business transformation initiatives companies are undertaking to remain relevant in the constantly changing marketplace. Companies that excel in mapping their existing capabilities and creating a road map to close the gap in their future capabilities are most likely to remain ahead of the competition by responding effectively to industry and market dynamics. Therefore, the way we connect the company’s high-level strategic priorities and objectives to the resources, processes, and ultimately the system landscape that execute the strategy is by mapping and modeling the necessary capabilities.


Leading Innovation = Managing Uncertainty

McKinsey Quarterly (2019) - Three Horizons Framework
Uncertainty is the central characteristic of innovation. While generating new ideas and inventing new technologies is important, it is even more important for innovators to identify the unknowns that have to be true for their ideas and technologies to succeed in the market. We can only claim to have succeeded at innovation when we find the right business model to profitably take our idea or technology to market. At the strategy level, there are several frameworks that have been developed to help leaders understand their product and service portfolios and make decisions. These frameworks use different dimensions that hide, in plain sight, the real challenge that leaders are facing: managing uncertainty. ... The McKinsey framework is perhaps the most popular of them all. This framework maps two dimensions of value and time to create three horizons. The nearest horizon is Horizon 1, where we extend the core and generate value for the company straight away. In Horizon 2, we build businesses around new opportunities with potential to impact revenues in the near term. The farthest horizon is Horizon 3, where visionaries work on viable options that will only deliver value to the company after several years.


Cloud Security Architectures: Lifting the Fog from the Cloud

The user behavior analytics (UBA) security solutions oriented primarily to the insider threat have matured and are commonplace. ... This data is crucial for forensics analyses when a major breach has been detected. There isn’t a comparable mature technology yet for the cloud where users have migrated their work. The successful UBA technology teaches security professionals how to properly architect new enterprise systems with users’ cloud behaviors in full view. The core approach for the cloud is to gather data from cloud storage logs, extract features and carefully architect sets of indicators that detect likely breaches. Cloud logs can reveal, for example, when file extensions are changed, what documents are downloaded and to where, whether a document has been downloaded to an anonymous user, and when an unusually high number of documents are downloaded to an odd geolocation. These are all early indicators of potential breach activity.
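A minimal sketch of extracting such indicators from cloud storage logs; the event fields, thresholds, and region names are all invented for illustration and would need to match a real provider's log schema:

```python
def breach_indicators(events, bulk_threshold=100,
                      expected_regions=frozenset({"EU", "US"})):
    """Scan cloud storage log events for simple early-breach indicators.

    Each event is a dict with (hypothetical) fields: 'action', 'user',
    'region', and optionally 'old_ext'/'new_ext' for renames.
    """
    alerts = []
    downloads_by_user_region = {}
    for e in events:
        # Indicator 1: a file's extension was changed.
        if e["action"] == "rename" and e.get("old_ext") != e.get("new_ext"):
            alerts.append(("extension_changed", e["user"]))
        if e["action"] == "download":
            # Indicator 2: a document went to an anonymous user.
            if e["user"] == "anonymous":
                alerts.append(("anonymous_download", e["region"]))
            key = (e["user"], e["region"])
            downloads_by_user_region[key] = downloads_by_user_region.get(key, 0) + 1
    # Indicator 3: unusually many downloads to an odd geolocation.
    for (user, region), count in downloads_by_user_region.items():
        if region not in expected_regions and count >= bulk_threshold:
            alerts.append(("bulk_download_odd_geo", user))
    return alerts
```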



Quote for the day:


"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani


Daily Tech Digest - April 20, 2019

How to reconstruct your business’s value chain for the digital world

What’s the big advantage of digital? It allows you to disconnect yourself from physical constraints. With Uber, you no longer have to be in the street to hail a cab. You can order a cab from anywhere. If you digitize the supply-chain process, you are no longer linking the production of the product to one physical location. In the analog world, a person would check the inventory and write an order for supplies. When there was a spike in demand, that person would call more people and write more orders for more supplies. But in the digital world, you can create a manufacturing process where your inventory, recipes, and prices are all available on a digitized, harmonized ecosystem. When demand spikes, you can turn the dial on your robotic process automation (RPA) tool. When we digitize and harmonize complex business processes, we no longer have to call a guy who orders a part. Instead, we have a view into the inventory across multiple suppliers. The CIO has a unique and critical role in digital transformation, as long as they don’t fall into a few common traps. One such trap is when the CEO throws money at you and tells you, “Bring me this shiny new technology.”


Why Enterprise-Grade Cybersecurity Needs a Federated Architecture

A federated architecture combines the strengths of centralized and distributed and is, therefore, a kind of “best of both worlds” approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those multiple controllers act in concert so as to provide the abstraction that there is one centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization’s security policy as well as the workloads that are being secured. This type of architecture is the best when it comes to securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was to deliver an enterprise-scale architecture that delivered the benefits of a federated architecture without the downsides of distributed and centralized.
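The replication idea, several per-region controllers that together behave like one logical controller, can be sketched as a toy model. This is not the vendor's implementation; real federation also needs failure handling and consistency protocols:

```python
class Controller:
    """One security-policy controller per data center or cloud region."""
    def __init__(self, region):
        self.region = region
        self.policy = {}     # rule_id -> rule
        self.peers = []

    def apply(self, rule_id, rule, _from_peer=False):
        """Install a rule locally and replicate it to every peer controller."""
        self.policy[rule_id] = rule
        if not _from_peer:   # replicate exactly once, never re-broadcast
            for peer in self.peers:
                peer.apply(rule_id, rule, _from_peer=True)

def federate(controllers):
    """Wire controllers into a full mesh so they act as one logical controller."""
    for c in controllers:
        c.peers = [p for p in controllers if p is not c]
    return controllers
```

A policy change submitted to any one controller becomes visible at every other, which is the abstraction of "one centralized controller" the text describes.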


Ready for 6G? How AI will shape the network of the future


Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day. The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times. That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.


IT Governance 101: IT Governance for Dummies, Part 2

One of the powerful aspects of COBIT is that it acts as the glue between governance and management, describing both governance and management processes. Its concept of cascading enterprise goals to IT goals to enabler goals and metrics ensures consistent communication and alignment. These enablers such as Processes are where all the IT management frameworks can be plugged in, helping to give the frameworks a business context and ensuring that they focus on delivering value and outcomes, not just outputs. As stated by one expert in the UAE, “I think often because organizations do not do a goals cascade things feel disconnected and orphaned, but once you do a proper goals cascade you can see and feel the interconnection and how goals are interdependent on each other to achieve the enterprise-level goals.” ... Clearly, these exploding business demands for new benefits exist and, at the same time, IT is expected to make everything secure, replace all that legacy stuff that is slowing down the Ubering, and stop IT from breaking as well.
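The goals cascade can be pictured as a simple mapping from enterprise goals through IT goals down to enabler goals; every goal string below is invented for illustration, not taken from COBIT itself:

```python
# A simplified, hypothetical goals cascade: enterprise goal -> IT goals -> enabler goals.
cascade = {
    "Grow revenue 10%": {
        "Launch digital sales channel": ["Deliver e-commerce platform",
                                         "Integrate payment gateway"],
    },
    "Reduce operational risk": {
        "Harden critical systems": ["Patch management process",
                                    "Incident response runbooks"],
    },
}

def trace(enterprise_goal):
    """List every enabler goal that ultimately supports an enterprise goal."""
    enablers = []
    for it_goal, enabler_goals in cascade.get(enterprise_goal, {}).items():
        enablers.extend(enabler_goals)
    return enablers
```

Tracing upward in the same structure is how a goals cascade shows that no enabler is "disconnected and orphaned": each one exists because some enterprise goal needs it.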


Some internet outages predicted for the coming month as '768k Day' approaches

The good news is that network admins have known about 768k Day for a long time, and many have already prepared, either by replacing old routers with new gear or by making firmware tweaks to allow devices to handle global BGP routing tables that exceed even 768,000 routes. "Yes, TCAM memory settings can be adjusted to help mitigate, and even go beyond 768k routes on some platforms, which will work if you don't run IPv6. These setting changes require a reboot to take effect," Troutman said. "The 768k IPv4 route limit is only a problem if you are taking ALL routes. If you discard or don't accept /24 routes, that eliminates half the total BGP table size." "The organizations that are running older equipment should know this already, and have the configurations in place to limit installed prefixes. It is not difficult," Troutman added. "I have a telco ILEC client that is still running their network quite nicely on old Cisco 6509 SUP-720 gear, and I am familiar with others, too," he said.
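The effect of discarding /24 prefixes can be sketched in a few lines; this only mimics the idea of the import filter Troutman describes, and is not actual router configuration:

```python
def accept_route(prefix_len, max_specific=23):
    """Mimic a BGP import filter that discards /24 (and longer) prefixes."""
    return prefix_len <= max_specific

def filtered_table_size(routes):
    """Count the routes that survive the filter.

    Since roughly half the global IPv4 table is /24 announcements, dropping
    them roughly halves the prefixes a router must install.
    """
    return sum(1 for _prefix, plen in routes if accept_route(plen))
```

Traffic to the discarded /24s still flows as long as covering aggregates or a default route remain installed, which is why the quote calls the filter low-risk for older gear.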


Bots Are Coming! Approaches for Testing Conversational Interfaces

When testing such interfaces, natural language is the input and we humans really love having alternatives and love our synonyms and our expressions. Testing in this context moves from pure logic to something close to fuzzy logic and clouds of probabilities. As they are intended to provide a natural interaction, testing conversational interfaces also requires a great deal of empathy and understanding of the human society and ways of interacting. In this area, I would include cultural aspects, including paraverbal aspects of speech (that is, all communication happening besides the spoken message, encoded in voice modulation and level). These elements provide an additional level of complexity and many times the person doing the testing work needs to consider such aspects. I believe it’s fair to say that testing a conversational interface can also be seen as tuning, so that it passes a Turing test. Another challenge faced when testing such interfaces is the distributed architecture of systems.
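One practical way to exercise the "synonyms and expressions" problem is a paraphrase-driven test table: each expected intent is tried with several natural phrasings. The keyword matcher below is a deliberately naive stand-in for a real NLU model, just to keep the sketch self-contained:

```python
# Hypothetical intents and trigger keywords; a production bot would use an NLU model.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "funds", "money left"},
    "transfer": {"transfer", "send", "wire"},
}

def classify(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "fallback"

# Each expected intent is exercised with several natural phrasings.
PARAPHRASE_CASES = [
    ("How much money left do I have?", "check_balance"),
    ("What's my balance?", "check_balance"),
    ("Please wire 50 euros to Anna", "transfer"),
    ("Send money to my landlord", "transfer"),
]

def run_paraphrase_suite():
    """Run every paraphrase against the classifier and record pass/fail."""
    return [(u, classify(u) == expected) for u, expected in PARAPHRASE_CASES]
```

Growing the paraphrase table over time, including culturally varied phrasings, is one concrete way to turn the fuzzy expectations the text describes into repeatable checks.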


Protecting smart cities and smart people

For as long as most can remember, information security was a technology concern, handled by technologists, and discussed by security engineers and associated professionals. The security vendors presented at security conferences, and the security professionals attended accordingly. Cat people with cat people. You know how it goes. Within a smart city ecosystem, we need to extend the cyber conversation beyond the traditional players. How do we make the City Planner appreciate what we understand? How do we share and apply security best practices to an engineering company providing a Building Information Modelling (BIM) service to a Hospital or Defence project? Moreover, how do we, in the first instance, highlight the security concerns? Attending and speaking at numerous cyber conferences, I sometimes wonder: is this the right audience? In this digital ecosystem, we should be speaking to civic and government leaders about our security concerns facing smart cities and critical infrastructure, not exclusively to other security professionals. They are well aware of the challenges and the resistance experienced.


Don't underestimate the power of the fintech revolution

According to Bank of England Governor Mark Carney, fintech’s potential is to unbundle banking into its core functions, such as settling payments and allocating capital. For central bankers and regulators who are monitoring the sector, the growth of fintech raises the same question as any other disruptive technology: will it lead to financial instability? Most fintech start-ups are not regulated as heavily as traditional financial institutions. So far, it’s the more open financial markets that have seen fintech develop rapidly. One example is the e-payment system M-Pesa, which operates in Kenya, Tanzania and elsewhere, and is one of the biggest fintech success stories since its emergence just a decade ago. By effectively transforming mobile phones into payment accounts, M-Pesa has increased financial access for previously unbanked people. The permissive stance of the Kenyan central bank allowed the sector to develop rapidly in one of East Africa’s most developed economies.


Data Breaches in Healthcare Affect More Than Patient Data

Cybercriminals go after any data they perceive to be valuable, says Rebecca Herold, president of Simbus, a privacy and cloud security services firm, and CEO of The Privacy Professor consultancy. "Payroll data contains a wide range of really valuable data that cybercrooks can sell to other crooks for high amounts," she says. "With the growing number of pathways into healthcare systems and networks ... that are being established through employee-owned devices, through third parties/BAs, and through IoT devices, I believe that such fraud is increasing because of the many more opportunities that crooks have now to commit these types of crimes." The recent attacks on Blue Cross of Idaho and Palmetto Health spotlight the importance for healthcare entities to diligently safeguard all data, says former healthcare CISO Mark Johnson of the consultancy LBMC Information Security. The attacks "underscore for me that the healthcare industry needs to protect the entire environment, not just their large systems like the EMR," he says.


Why Your DevOps Is Not Effective: Common Conflicts in the Team

In the DNA of DevOps culture lies the principle of constant and continuous interaction and collaboration between different people and departments. The key reason for this is a much greater final efficiency and a much shorter time-to-market compared to the traditional approach. Proper implementation of DevOps shifts the focus from personal effectiveness to team efficiency. At the same time, due to automation and the widespread introduction of monitoring and testing, it is possible to track the occurrence of a problem at the early stages, as well as quickly find the causes of problems. Building the right culture in the organization is important, and it does not depend on DevOps directly: problems occur in all companies, but in an organization with the right culture, all forces will be directed at solving the problem and preventing it in the future, rather than at finding and punishing the guilty party.



Quote for the day:


"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor


Daily Tech Digest - April 18, 2019

Automation is a machine and a machine only does what it is told to do. Complicated tests require a lot of preparation and planning and also have certain boundaries. The script then follows the protocol and tests the application accordingly. Ad-hoc testing helps testers to answer questions like, “What happens when I follow X instead of Y?” It helps the tester to think and test using an out-of-the-box approach, which is difficult to program in an automation script. Even visual cross-browser testing needs a manual approach. Instead of depending on an automated script to find out the visual differences, you can check for the issues manually, either by testing on real browsers and devices or, even better, by using cloud-based, cross-browser testing tools, which allow you to test your website seamlessly across thousands of different browser-device-operating system combinations. ... Having a manual touch throughout the testing procedure instead of depending entirely on automation will ensure that there are no false positives or false negatives as test results after a script is executed.


Understanding the key role of ethics in artificial intelligence

It has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract and high level. This is not a bad thing, but it often assumes that such ‘talk’ amounts to addressing the challenges of ethical AI. The facts, however, are much more complex. For example, guidelines themselves are often ineffective (a recent study showed the ACM’s code of ethics had little effect on the decision-making process of engineers). Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behavior in the context of the complex machinery that underpins AI is extremely challenging. ... Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives: college admissions, financial decision-making systems, and the news we consume on Facebook and other media sites.


Researchers: Malware Can Be Hidden in Medical Images
The "flaw" discovered in the DICOM file format specification could allow attackers to embed executable code within DICOM files to create a hybrid file that is both a fully functioning Windows executable as well as a specification-compliant DICOM image that can be opened and viewed with any DICOM viewer, the report says. "Such files can function as a typical Windows PE file while maintaining adherence to the DICOM standard and preserving the integrity of the patient information contained within," according to the report. "We've dubbed such files, which intertwine executable malware with patient information, PE/DICOM files." By exploiting this design flaw, the report says, attackers could "take advantage of the abundance and centralization of DICOM imagery within healthcare organizations to increase stealth and more easily distribute their malware, setting the stage for potential evasion techniques and multistage attacks." The fusion of fully functioning executable malware with HIPAA protected patient information adds regulatory complexities and clinical implications to automated malware protection and typical incident response processes, the researchers say.


Sometimes, rather than look at problem areas in the business, he says the team focuses on exploring pure technology. As an example, Chatrain says algorithms such as Generative Adversarial Networks (GANs) can be used to generate fake data, such as fake pictures of people who do not actually exist. “We dedicate part of our exploratory time to such techniques and technologies and then look for applications,” he says. Looking at a practical example of how a fake data algorithm could be deployed, he says: “With GDPR and the need to feed test systems with high volumes of realistic data, we used [synthetic data algorithms] to create fake travellers with travel itineraries.” Such synthetic data is indistinguishable from the data that represents the travel plans of real people, and this data can be used to test the robustness of systems at Amadeus. “Today, no one tests the systems if we have twice as much data,” says Chatrain. But this is possible if data for a vast increase in passenger numbers is simply generated via a synthetic data algorithm. Beyond being used to test application software, he says synthetic data also enables Amadeus to anonymise the data it shares with third parties. “We are not allowed to share [personal] data, but we still need a business partnership.”
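As a toy illustration of the synthetic test data idea, unrelated to Amadeus's actual GAN-based tooling, here is a generator that fabricates travellers and itineraries from random draws; a real system would learn the statistical shape of genuine bookings rather than sampling uniformly:

```python
import random
import string

AIRPORTS = ["NCE", "CDG", "LHR", "JFK", "MAD"]   # sample IATA codes

def fake_traveller(rng):
    """Generate one synthetic traveller with a plausible itinerary.

    All values are random draws; nothing is derived from real passenger data,
    so the output is safe to load into test systems or share externally.
    """
    name = "".join(rng.choices(string.ascii_uppercase, k=6))
    origin, destination = rng.sample(AIRPORTS, 2)   # distinct endpoints
    return {
        "name": name,
        "itinerary": [{"from": origin, "to": destination,
                       "day": rng.randint(1, 28)}],
    }

def fake_population(n, seed=42):
    """Scale the test data set to any volume, e.g. twice today's traffic."""
    rng = random.Random(seed)                       # deterministic for tests
    return [fake_traveller(rng) for _ in range(n)]
```

Because the generator is seeded, the same "twice as much data" load test can be replayed exactly, which is harder to arrange with anonymised copies of production data.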


What is project portfolio management? Aligning projects to business goals

With PPM, not only are project, program, and portfolio professionals able to execute at a detailed level, but they are also able to understand and visualize how project, program, and portfolio management ties to an organization’s vision and mission. PPM fosters big-picture thinking by linking each project milestone and task back to the broader goals of the organization. ... Capacity planning and effectively managing resources is largely dependent on how well your PMO executes its strategy and links the use of resources to company-wide goals. It is no secret that wasted resources is one of the biggest issues that companies encounter when it comes to scope creep. PPM decreases the chances of wasted resources by ensuring resources are allocated based on priority and are being effectively sequenced and wisely leveraged to meet intended goals. ... PMOs that communicate to project teams and other stakeholders, such as employees, why and how project tasks are vital in creating value increase the likelihood of a higher degree of productivity. 


Startup MemVerge combines DRAM and Optane into massive memory pool
Optane memory is designed to sit between high-speed memory and solid-state drives (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor. Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache but apps don’t natively support it. They need to be tweaked to function properly in Optane memory. As it was explained to me, apps aren’t designed for persistent storage because the data is already in memory on powerup rather than having to load it from storage. So, the app has to know memory doesn’t go away and that it does not need to shuffle data back and forth between storage and memory. Therefore, apps natively don’t work in persistent memory.
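The programming-model shift can be illustrated with an ordinary memory-mapped file standing in for persistent memory: the app works directly in a region that survives restarts and must flush explicitly, instead of loading state from storage on start-up and saving it on exit. With App Direct Mode the flush would target Optane media via a library such as libpmem; this sketch only mimics the model on regular hardware:

```python
import mmap
import os
import tempfile

def open_region(path, size=4096):
    """Map a file as a byte-addressable region that outlives the process."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * size)
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), size)

path = os.path.join(tempfile.mkdtemp(), "region.bin")

# First "run" of the app: write state directly into the mapped region.
f, region = open_region(path)
region[0:5] = b"state"
region.flush()        # explicit flush, as persistent-memory code must do
region.close()
f.close()

# Second "run": the state is simply there; no load-from-storage step.
f, region = open_region(path)
assert bytes(region[0:5]) == b"state"
region.close()
f.close()
```

This is exactly the awareness the text describes: the code must know the region does not go away, and that durability comes from flushing, not from shuffling data between memory and storage.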


The group hopes to turn out the first iteration of its Token Taxonomy Framework (TTF) later this year; afterward it plans to educate the blockchain community and collaborate through structured Token Definition Workshops (TDW) to define new or existing tokens. Once defined, the taxonomy can be used by businesses as a baseline to create blockchain-based applications using digital representations of everything from supply chain goods to non-fungible items such as invoices. "We'll do some workshops...to validate and make sure we have the base definition of a non-fungible token," said Marley Gray, Microsoft's principal architect for Azure blockchain engineering and a member of the EEA's Board of Directors. "As we go through workshops, we will probably find we should add this attribute or this clarification or this example that helps someone understand it." The organizations that have agreed to participate in the standardization effort include Accenture, Banco Santander, Blockchain Research Institute, BNY Mellon, Clearmatics, ConsenSys, Digital Asset, EY, IBM, ING, Intel, J.P. Morgan, Komgo, R3, and Web3 Labs.



Each micro-component runs an independent processing flow that performs a single task. For example, if your application has a network layer, you may also have Network Receiver and Network Sender components which only have the responsibility for receiving/sending data through the network. If your application has a logging layer it might also be implemented as an independent micro-component. Each micro-component defines its own interface of outgoing/incoming events, and the internal processing flow for them. For example, the Network Receiver might define the OutgoingClientRequests channel, which would be populated with newly received requests from the users. Interfaces, as you might guess, are implemented on top of channels, so the communication flows look very obvious, predictable, and easily maintainable in this perspective. The core’s role is to connect various outgoing channels with various incoming channels and to enable data flow between various micro-components.
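A minimal sketch of this style using Python queues as channels; the `NetworkReceiver` and `OutgoingClientRequests` names follow the examples in the text, while everything else is invented for illustration:

```python
import queue

class NetworkReceiver:
    """Micro-component with a single task: turn raw bytes into client requests."""
    def __init__(self):
        # Its declared interface: a channel of newly received requests.
        self.outgoing_client_requests = queue.Queue()

    def on_bytes(self, raw):
        self.outgoing_client_requests.put(raw.decode())

class Logger:
    """Independent micro-component that only logs what it receives."""
    def __init__(self):
        self.incoming = queue.Queue()
        self.lines = []

    def run_once(self):
        self.lines.append("REQ: " + self.incoming.get())

def core_connect(out_channel, in_component):
    """The core's only job: move events between components' channels."""
    in_component.incoming.put(out_channel.get())
```

Because every component touches only its own channels, the data flow between them is visible in one place (the core's wiring), which is what makes this layout easy to reason about and maintain.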


Cisco Talos details exceptionally dangerous DNS hijacking attack

Talos noted "with high confidence" that these operations are distinctly different and independent from the operations performed by DNSpionage. In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. Let's Encrypt provides X.509 certificates for Transport Layer Security (TLS) free of charge to the user, Talos said. The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization.


Wipro Detects Phishing Attack: Investigation in Progress

Wipro's systems were seen being used as jumping-off points for digital phishing expeditions targeting at least a dozen Wipro customer systems, the blog says. "Wipro's customers traced malicious and suspicious network reconnaissance activity back to partner systems that were communicating directly with Wipro's network," according to the blog. In a statement, Wipro says: "Upon learning of the incident, we promptly began an investigation, identified the affected users and took remedial steps to contain and mitigate any potential impact." The firm tells ISMG that none of its customers' credentials have been affected, as was alleged in the blog. Some security experts, however, say Wipro may be the victim of a nation-state sponsored attack. "It is most likely by a nation-state. They use this modus operandi to breach a vendor network first and through that route they attack their customers," says a Bangalore-based security expert, who did not wish to be named. "That is because customers will consider Wipro's network safe."



Quote for the day:


"A good leader leads the people from above them. A great leader leads the people from within them." -- M.D. Arnold


Daily Tech Digest - April 17, 2019

What SDN is and where it’s going

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network. Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for Pluribus. “At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.” ... Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said.


Use of AI in wealth management must be applied smartly

“AI can offer a solution to these problems by helping to automate on-boarding processes, provide smarter access to data and create new customer experiences. However, it’s critical any implementation be undertaken smartly. It shouldn’t be a case of automating for automation’s sake. Because of this we see the use of AI best applied in small-steps. “This starts with automating and streamlining manual processes, such as onboarding a new client. This could include all forms of engagement from initial communications, anti-money laundering checks, risk profiling, and all the legal documentation in between. Additionally, by using intelligent information management solutions, staff have the means to simplify how they access, secure, process and collaborate on documentation. Doing so will aid productivity, enabling staff to find and access information across their systems much faster so they can build stronger relationships with their clients.


Security Is Key To The Success Of Industry 4.0

There is often a perception among manufacturers that cloud computing is less secure than managing data on-site. The reality is that the opposite is true. Network security is closely related to physical access. After all, in an on-site server room, anyone could gain access, pop in a USB stick, and steal sensitive information. Conversely, cloud vendors store data in locations locked down with security guards and numerous physical barriers between any would-be hacker and the target server. Additionally, the cloud offers more network resilience. Businesses that rely on on-premises servers face exposure and operational risk during an act of force majeure, such as a fire or natural disaster. With the cloud, that risk is spread over multiple secure locations, significantly reducing the chance of disruption. Security is an ongoing concern; there will always be new vulnerabilities. Many of the biggest hacks – such as the Petya malware that first appeared in 2016 – targeted old Windows technology, which is why it is key to ensure the software is always up to date.


C-Suite: The New Main Target of Phishing

Evolving phishing attacks mean that criminals are continually looking for new ways to completely mask their malicious URLs, especially on mobile devices. They either hide them behind a page like Google Translate that users are already familiar with or completely trick users with custom web fonts and altered characters. One of the latest approaches is to create an Office 365 meeting invite that contains quiz buttons or a poll asking recipients to pick the topic or date for the next meeting; employees that end up clicking are presented with a fake Office 365 login page where they enter their O365 credentials and then lose control over their email account. Another approach is an email that comes from someone you know with a request to take a look at something for them. When you click on the link or attachment, malware installs on your system, takes over your email client, and then emails the same message from you to all your contacts. All is not lost, however. There is a way to help prevent and thwart these attacks. You need a security awareness program that instils a culture of security throughout your organization starting in the boardroom and leading by example.



While this bill remains on the House and Senate floor, there are some ways that state and local governments can begin securing their systems. The first step should be an audit, allowing key decision-makers to get on the same page about the status of their security. This audit should include secretaries of state, members of the academic community and all cybersecurity staff. Everyone should review the cybersecurity controls and the threat vectors that have been exploited in local systems. Improperly informed stakeholders are the greatest vulnerability. U.S. election security needs greater state-by-state alignment. Elections are run on a hodgepodge of systems that vary from state to state, including paper ballots, electronic screens and Internet voting. Before local elections, midterms and the 2020 presidential election, state officials need to meet with their Boards of Elections and document their end-to-end election process with all of its systems, dependencies and interfaces.


Surviving the existential cyber punch

Top-notch organisations understand the threat environment well. They invest time and effort to maintain situational awareness as to who also values their information and could serve as a threat. They understand that threats may come from many vectors including the physical environment, natural disasters, or human threats. Further, they understand that human threats include such entities as vandals, muggers, burglars, spies, saboteurs, and careless, negligent or indifferent personnel in their own ranks. They invest in information sharing organisations, subscribe to threat information sources, and share their own observations as part of the Cyber Neighbourhood Watch construct. These organisations also know the importance of maintaining positive relationships with the cyber divisions of law enforcement organisations. Even before you have been attacked, your local cyber law enforcement organisation can serve as a rich source of threat intelligence that can help you better manage your cyber risk exposure.


Should that be a Microservice? Keep These Six Factors in Mind


If a module needs to have a completely independent lifecycle, then it should be a microservice. It should have its own code repository, CI/CD pipeline, and so on. Smaller scope makes it far easier to test a microservice. I remember one project with an 80-hour regression test suite! Needless to say, we didn’t execute a full regression test very often. A microservice approach supports fine-grained regression testing. This would have saved us countless hours. And we would have caught issues sooner. ... If the load or throughput characteristics of parts of the system are different, they may have different scaling requirements. The solution: separate these components out into independent microservices! This way, the services can scale at different rates. Even a cursory review of a typical architecture will reveal different scaling requirements across modules. Let’s review our Widget.io Monolith through this lens.


Strong security defense starts with prioritizing, limiting data collection

As cybercrime, user fraud and other security threats become more prevalent and detrimental, the ability to confidently know who you’re dealing with online has become ubiquitous, but what most companies tend to overlook is the responsibility and liability that they automatically assume when they collect and store personal data in order to validate their constituents. As a result, some businesses hold large volumes of personal data because they believe it’s necessary for comprehensive identity and credential verification, but this practice can be risky, especially for companies with weak or limited data protection protocols in place. Data breaches have costly repercussions, including loss of customers, compromised intellectual property, loss of brand trust and, of course, meaningful revenue declines that result, but regulatory penalties can be the most expensive of all consequences. For example, violating GDPR’s strict rules around data privacy can warrant fines of up to €20M, or 4 percent of the worldwide annual revenue of a company.


How botnets pose a threat to the IoT ecosystem 


Botnets are particularly challenging because they evolve over time and new forms constantly emerge, one of which is TheMoon. Benjamin tells Computer Weekly: “Threat researchers at CenturyLink’s Black Lotus Labs recently discovered a new module of IoT botnet called TheMoon, which targets vulnerabilities in routers within broadband networks.” Benjamin explains that a previously undocumented module, deployed on MIPS devices, turns the infected device into a Socks proxy that can be sold as a service. “This service can be used to circumnavigate internet filtering or obscure the source of internet traffic as a part of other malicious actions,” he says.  Attackers are using botnets such as TheMoon for a range of crimes, including credential brute forcing, video advertisement fraud and general traffic obfuscation. “For example, our team observed a video ad fraud operator using TheMoon as a proxy service, impacting 19,000 unique URLs on 2,700 unique domains from a single server over a six-hour period,” says Benjamin.


Cryptocurrencies Will Never Replace Us, Cries Romanian Central Bank Official

Daianu went on to defend the state’s role in issuing currency, saying it was the ‘only possible last-resort lender’. In this regard, the central bank official implied that during a financial crisis, only the state can save the situation: In markets, the state is the only possible last-resort lender. When the banking system was saved, it wasn’t crypto banks that were saved. Central banks intervened by issuing base currency, which was followed by non-conventional measures. This statement is likely to get Daianu in trouble with crypto enthusiasts, as the unhindered printing of money is what spawned cryptocurrencies as we know them today. The central bank official also revealed that centralized institutions are yet to understand the importance of the deflationary approach cryptocurrencies such as Bitcoin have taken. This was demonstrated by his statement that the central banks’ answer to cryptocurrencies is to issue a digital currency that can ‘multiply’!



Quote for the day:


"And the trouble is, if you don’t risk anything, you risk more." -- Erica Jong


Daily Tech Digest - April 16, 2019

IT pursues zero-touch automation for application support


Automation is a top goal, from application conception -- or selection, in the case of a third-party business application -- through adoption and use. Executive-level management wants zero-touch automation that controls every application, all the IT resources it runs on and every step of every development and operations process. Zero-touch automation, sometimes called ZTA, covers two specific goals: Sustain an infrastructure that supports applications, databases and workers, and accurately automate application mapping onto IT infrastructure. The former is about analytics and capacity planning, and the latter underpins practices such as DevOps and orchestration. DevOps, both as technologies and cultural changes that drive faster, better software delivery and operations, predates advances in cloud computing and virtualization. Development teams would build something and turn it over to operations to run, without consideration for the operational deployment requirements.


Nutanix powers Manchester City Council’s IT


The council assessed Nutanix, HPE SimpliVity, HPE Synergy and the VxRail appliance from Dell-EMC and VMware. Farrington says it selected Nutanix running on a Supermicro appliance because “Nutanix offered the closest to a silver bullet – we could get everything from a single vendor”. In Farrington’s experience, HCI gives the council greater flexibility than traditional IT infrastructure. One benefit is a distributed storage fabric with thin provisioning, which enables the council to make the most of its storage capacity. “We have the ability to scale quickly. The ability to add another storage and compute device quickly is beneficial,” he says. “We also benefit from the deduplication and compression services that are built in.” HCI has also provided a way to bring together the support teams for Windows servers and storage. “I had six teams to look after the datacentre facility,” says Farrington. “Historically, we had two teams – one looked after our 900 Windows servers, the other looked after storage and backup. ...”


Top 10 Features to Look for in Automated Machine Learning


Feature engineering is the process of altering the data to help machine learning algorithms work better, which is often time-consuming and expensive. While some feature engineering requires domain knowledge of the data and business rules, most feature engineering is generic. Look for an automated machine learning platform that can automatically engineer new features from existing numeric, categorical, and text features. You will want a system that knows which algorithms benefit from extra feature engineering and which don’t, and only generates features that make sense given the data characteristics. ... It’s quite standard for machine learning software to train the algorithm on your data. After all, you wouldn’t want to manually do Newton-Raphson iteration would you? Probably not. But, often there’s still the hyperparameter tuning to worry about. Then you want to do feature selection, to improve both the speed and accuracy of a model. Look for an automated machine learning platform that uses smart hyperparameter tuning, not just brute force, and knows the most important hyperparameters to tune for each algorithm.
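The contrast between brute-force and budgeted hyperparameter search that the passage draws can be shown with a toy example. The quadratic "validation error" below is a stand-in for a real model's score, and every number in it is an illustrative assumption:

```python
import itertools
import random

# Toy stand-in for a model's validation error as a function of two
# hyperparameters (learning rate and regularization strength).
def validation_error(lr, reg):
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

# Brute-force grid search: cost grows multiplicatively with every
# hyperparameter added to the grid.
grid = list(itertools.product([0.001, 0.01, 0.1, 1.0], [0.0, 0.01, 0.1]))
best_grid = min(grid, key=lambda p: validation_error(*p))

# Budgeted random search: a fixed number of samples, which in practice
# spends effort on the hyperparameters that actually matter instead of
# exhaustively covering a coarse grid.
random.seed(0)
samples = [(10 ** random.uniform(-3, 0), random.uniform(0, 0.1))
           for _ in range(12)]
best_random = min(samples, key=lambda p: validation_error(*p))

print(best_grid)  # (0.1, 0.01): this grid happens to contain the optimum
```

A "smart" tuner goes one step further than random search by modeling which regions of the space look promising, but the budget-versus-exhaustion trade-off is the same.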


Machine Learning Widens the Gap Between Knowledge and Understanding


Given how imperfect our knowledge has always been, this assumption has rested upon a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus, at least, somewhat pliable to our will. But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to understanding it than we can, and they, as machines, don’t really understand anything at all. This, in turn, challenges another assumption we hold one level further down: The universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth.


How Azure uses machine learning to predict VM failures


On average, disk errors start showing up between 15 and 16 days before a drive fails, and in the last 7 days before it fails reallocated sectors triple and device resets go up tenfold. Behaviour and failure patterns vary from one drive manufacturer to another, and even between different models of hard drive from the same vendor. The telemetry for training the machine learning system has to be collected from different kinds of workloads, because that affects how quickly the failure is going to happen: if the VM is thrashing the disk, a drive with early signs of failure will fail fairly quickly, whereas the same drive in a server with a less disk-intensive workload could carry on working for weeks or months. Azure has a similar machine-learning system that predicts failures of compute nodes. In both cases, instead of trying to definitively predict whether a specific piece of hardware is failing, the systems rank them in order of how error-prone they are. The top systems on the list stop accepting new VMs and have running VMs live-migrated off onto different nodes, and then get taken out of service for testing.
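The rank-and-drain approach described above can be sketched simply. The telemetry fields and the linear scoring function below are illustrative assumptions; Azure's actual systems use trained models over far richer signals:

```python
# Hypothetical per-drive telemetry; field names are illustrative,
# not Azure's actual schema.
drives = [
    {"id": "node-a", "reallocated_sectors": 4,   "device_resets": 1},
    {"id": "node-b", "reallocated_sectors": 120, "device_resets": 40},
    {"id": "node-c", "reallocated_sectors": 35,  "device_resets": 9},
]

def error_proneness(d):
    # Stand-in scoring function; a production system would plug in a
    # trained model's predicted failure probability here.
    return 0.7 * d["reallocated_sectors"] + 0.3 * d["device_resets"]

# Rank rather than classify: most error-prone drives first.
ranked = sorted(drives, key=error_proneness, reverse=True)

# Drain the top of the list: stop scheduling new VMs there, live-migrate
# the running ones, then pull the node for testing.
to_drain = [d["id"] for d in ranked[:1]]
print(to_drain)  # ['node-b']
```

Ranking sidesteps the need for a calibrated yes/no failure prediction: the operator only has to decide how many nodes per day they can afford to take out of service.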



SQL Server users could already run the database themselves on Google Cloud Platform (GCP) via VMs, but Google will fully manage the upcoming service through its Cloud SQL offering, which already features PostgreSQL and MySQL. Google's managed SQL Server service will support all editions of SQL Server 2017, which also has backward compatibility with older versions of the database, said Dominic Preuss, director of product management for Google Cloud, at the Cloud Next conference here this week. AWS has offered a similar service through its Relational Database Service for years. Moreover, Microsoft has worked since 2009 on its Azure SQL managed service. Microsoft's effort has endured some fits and starts over the years. Customers that wanted to move very large SQL Server databases to the cloud had to run them on Azure's VM-based service or break them apart into multiple pieces, given Azure SQL's size limitations.


How to deal with backup when you switch to hyperconverged infrastructure

Each HCI vendor offers a hardware configuration using components supported by the virtualization vendors it wishes to support. Since the system comes pre-built you can be assured that all the hardware components will work together and will work with any supported hypervisors. Any incompatibilities between the various components will be handled by the HCI vendor. Some HCI vendors also offer their own hypervisors. The best example of this would be Nutanix with their Acropolis hypervisor. Typically such a hypervisor will offer tighter integration with the HCI hardware and integrated data-protection features. Often, the built-in hypervisor is also less expensive than traditional hypervisors, especially if you take advantage of the native data-protection features. The final type of HCI vendor supports neither VMware nor Hyper-V, nor do they use their own hypervisor. Scale Computing uses the KVM hypervisor, which is open source. Like Nutanix, they do this to reduce their customers’ TCO while offering much of the same functionality that VMware offers. In addition, they also offer integrated data protection.


How AIOps Supports a DevOps World


AIOps can also automate workflows for alerts that require escalation, human attention and/or investigation. For example, alerts on devices supporting business-critical IT services require notification of Level 1 support staff within five minutes of alert receipt. If the alert is from a server and for a specific application, an IT or DevOps user will need to create an incident and route it to the relevant application team. AIOps takes care of this immediately with alert escalation workflows that help program first-response actions for notification and incident creation. Again, this can occur completely unsupervised – no human interaction required – once these policies are established. What’s more, policy-driven AIOps correlates dependencies based on downstream resources or establishes an algorithm-based correlation to address groups of alerts continuously. This drastically frees up time that is typically spent sifting through alert floods, figuring out what to do with them, and then doing it. Advanced AIOps tools use native instrumentation to determine how frequently specific alert sequences occur.
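Such escalation policies amount to a rule table over alert attributes. Here is a minimal Python sketch; the thresholds, action names, and routing scheme are all assumptions for illustration, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "server", "switch"
    service_critical: bool   # supports a business-critical IT service?
    application: str = ""    # application tied to the alert, if any

def escalate(alert):
    """Policy-driven first response; runs unsupervised once policies exist."""
    actions = []
    if alert.service_critical:
        # Business-critical services: notify Level 1 support within 5 minutes.
        actions.append("notify-L1-within-5m")
    if alert.source == "server" and alert.application:
        # Server alerts tied to an application: open an incident and route
        # it to the owning application team.
        actions.append(f"create-incident:app-team-{alert.application}")
    return actions

print(escalate(Alert(source="server", service_critical=True,
                     application="billing")))
# ['notify-L1-within-5m', 'create-incident:app-team-billing']
```

In a real AIOps tool these rules would be configuration rather than code, and alert correlation would collapse related alerts into one incident before any rule fires.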


Doing continuous testing? Here's why you should use containers


As nearly every software tester has experienced, test environments are a mixed blessing. On one hand, they allow end-to-end tests that would otherwise have to be executed in production. Without a test environment, testing teams would be shipping code that hasn't been tested across functional boundaries out to users—and hoping for the best. A well-configured and maintained test environment, one that closely mimics production and contains up-to-date code deployments, can provide a safe and sane way for testers to validate a scenario before it gets into the hands of a customer. Problematically, however, test environments encourage a mode of development that is fast becoming outdated: long integration cycles, an untrustworthy main source trunk, and late-stage testing. The most productive, highest-performing engineering teams do just the opposite. They need to be able to trust that code in the main trunk could go to production at any time. They often shift left on quality, with the majority of testing happening before a code change even lands.


Kotlin Multiplatform for iOS Developers


KMP works by using Kotlin to program business logic that is common to your app's various platforms. Then, each platform's natively programmed UI calls into that common logic. UI logic must still be programmed natively in many cases because it is too platform-specific to share. In iOS this means importing a .framework file - originally written in KMP - into your Xcode project, just like any other external library. You still need Swift to use KMP on iOS, so KMP is not the end of Swift.  KMP can also be introduced iteratively, so you can implement it with no disruption to your current project. It doesn't need to replace existing Swift code. Next time you implement a feature across your app's various platforms, use KMP to write the business logic, deploy it to each platform, and program the UIs natively. For iOS, that means business logic in Kotlin and UI logic in Swift. The close similarity between Swift's and Kotlin's syntax greatly reduces the learning curve involved in writing that KMP business logic.



Quote for the day:


"To double your net worth, double your self-worth. Because you will never exceed the height of your self-image." -- Robin Sharma


Daily Tech Digest - April 15, 2019

The Staying Power of Legacy Systems

“As strange as it might seem, we migrated our environment away from these servers, and opted instead to run our Linux systems on an IBM mainframe, even though we didn’t use the IBM native z/OS operating system itself,” said the CEO. “The mainframe-resident systems were able to deliver the five nines uptime we were promising our customers, and when we had problems, the vendor’s support was swift and responsive. ... If you are a new company without an investment in legacy systems, you can look at any solution in the IT marketplace, whether it is legacy or not. But for most companies, the decisions on hardware and software will come down to a “best-in-class” choice that considers the platforms companies are already running on, and where their companies need to be with their IT in the next 10 to 20 years. In this environment, new vendors with innovative solutions will continue to attract market share, but at the same time best-in-class legacy systems will continue to be attractive, because they have done anything but stand still. Most legacy systems now come in cloud as well as in in-house implementations. Most legacy systems also have provisions for integration with or add-ons for Web-facing and social media apps.



How to avoid software outsourcing problems

The importance of choosing the correct outsourcing partner simply cannot be overstated. Working with an experienced and well-regarded software outsourcing company has helped many companies expand beyond their initial startup stage, rapidly adjust to market pressures, and bring custom software to the market while maintaining their agility as a growing organization. The best outsourcing partners will provide assistance through every aspect of the software development cycle, helping their clients conceptualize, execute, and bring their software to market. However, working with poor software outsourcing companies can be counterproductive. It can lead to massive cost overruns, harm company morale, and lead to numerous missed deadlines as they struggle to fix their own mistakes. In addition, all of this frustration may be for naught if the final software reflects their haphazard approach and lack of attention to detail. This article will help companies avoid these pitfalls by identifying the 8 most common outsourcing problems, as well as their solutions.


How DataOps helps organisations make better decisions


Making it easier for people to work with data is a key requirement in DataOps. Nigel Kersten, vice president of ecosystem engineering at Puppet, says: “The DataOps movement focuses on the people in addition to processes and tools, as this is more critical than ever in a world of automated data collection and analysis at a massive scale.” DataOps practitioners (DataOps engineers or DOEs) generally focus on building data governance frameworks. A good data governance framework – one that is fed and watered regularly with accurate de-duplicated data that stems from the entire IT stack – is able to help data models to evolve more rapidly. Engineers can then run reproducible tests using consistent test environments that ingest customer data in a way that complies with data and privacy regulations. The end result is a continuous and virtuous develop-test-deploy cycle for data models, says Justin Reock, chief architect at Rogue Wave, a Perforce Company. “At the core of all modern business, code is needed to transport, analyse and arrange domain data,” he says.


Artificial Intelligence: A Cybersecurity Solution or the Greatest Risk of All?

AI can also become a real headache for cybersecurity professionals around the globe. Just as security firms can use the tech to spot attacks, so can hackers in order to launch more sophisticated attack campaigns. Spear phishing is just one example out of many, as using machine learning tech can allow cybercriminals to craft more convincing messages intended to dupe the victim into giving the attacker access to sensitive information or installing malicious software. AI can even help in matching the style and content of a spear phishing campaign to its targets, as well as enhance the volume and reach of the attacks exponentially. Meanwhile, ransomware attacks are still a hot topic, especially after the WannaCry incident that reportedly cost the British National Health System a whopping £92 million in damages – £20 million during the attack, between May 12 and 19, 2018, and a further £72 million to clean and upgrade its IT networks – and meant that 19,000 healthcare appointments had to be cancelled.


Build A Strong Cybersecurity Posture With These 10 Best Practices

When you plan to overhaul your cybersecurity infrastructure, it’s important to keep the weakest link in mind: the people in your organization. Yes, you should invest in the right technology that takes your network and endpoint security to the next level, but make sure your organization’s workforce is aware of the cyberthreats they face and how they must address these threats. Conduct security awareness training programs that establish a culture of cybersecurity awareness. ... When it comes to cyberattacks, it is not a matter of if they will happen, but when they will happen. Prevention is definitely better than cure, but if your organization does experience an attack, it is important to understand how it happened, how it unfolded and the vulnerabilities it was able to exploit. Root cause analysis can help you find the cause and plug key vulnerabilities. ... What if an attacker manages to fly under the radar and your resource-constrained IT team fails to identify a data breach in progress? Such disastrous consequences can be avoided if the threat gets identified proactively.


How to be an edgy CIO

Edge computing is the delivery of computing infrastructure as close as possible to the sources of data (the logical extremes of a network), designed to improve the performance, operating cost and reliability of applications and services. Edge computing reduces network hops, latency, and bandwidth constraints by distributing new resources and software stacks along the path between centralized data centers and the increasingly large number of devices in the field. By shortening the distance between devices and the cloud resources that serve them, edge computing ultimately turns massive amounts of machine-based data into actionable intelligence. Edge resources sit, in particular but not exclusively, in close proximity to the last-mile network, on both the infrastructure and device sides. The word “edge” refers specifically to geographic distribution. While edge computing is a form of cloud computing, it works differently by pushing data processing out to the literal “edge” devices, rather than relying on a centralized data center to do all the work. This complementary computing model frees up bandwidth, since data no longer has to be constantly pushed back and forth to the data center.


Increasing trust in Google Cloud: visibility, control and automation


Your first line of defense for cloud deployments is your virtual private cloud (VPC). VPC Service Controls, now generally available, go beyond your VPC and let you define a security perimeter around specific GCP resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to help mitigate data exfiltration risks. As you move workloads to the cloud, you need visibility into the security state of your GCP resources. You also need to be able to identify threats and vulnerabilities so you can respond quickly. Last year, we introduced Cloud Security Command Center (Cloud SCC), a comprehensive security management and data risk platform for GCP. Cloud SCC is now generally available, offering a single pane of glass to help prevent, detect, and respond to threats across a broad swath of GCP services. As part of GA, we’re excited to announce the first set of prevention, detection, and response services that can help you uncover risky misconfigurations and malicious activity:


The Single Cybersecurity Question Every CISO Should Ask

Today, every organization – regardless of industry, size, or level of sophistication – faces one common challenge: security. Breaches grab headlines, and their effects extend well beyond the initial disclosure and clean-up. A breach can do lasting reputational harm to a business, and with the enactment of regulations such as GDPR, can have significant financial consequences. But as many organizations have learned, there is no silver bullet – no firewall that will stop threats. They are pervasive, they can just as easily come from the inside as they can from outside, and unlike your security team, who must cover every nook and cranny of the attack surface, a malicious actor only has to find one vulnerability to exploit. ... In a world in which security and IT operations are often at odds, this may seem counterintuitive, but the truth is what SecOps calls "the attack surface" is what IT ops calls "the environment." And no one knows the enterprise environment – from the data center to the cloud to the branch and device edge – better than the team tasked with building and managing it.


Capitalising on the power of modern data sharing
While there are undoubted benefits to data sharing, for too long organisations have relied on legacy technologies, such as outdated big data platforms or on-premises data warehouses, to manage their data, and these have been ill-equipped to meet modern data requirements. Given the number of data access points now available, legacy tech has been unable to handle large datasets, especially as the velocity, variety and volume of data continue to grow. Simple queries could take days or even weeks to run on traditional on-premises technology, posing a real obstacle to getting immediate answers. This has meant that while internal data is easier to access, external data has been far more difficult. Thankfully, the arrival of cloud-built data warehouses is alleviating many of these struggles and helping organisations capitalise on the data sharing economy. This fits hand-in-hand with organisations’ growing adoption of cloud infrastructure, with 85% of organisations expected to adopt cloud technologies by 2020 — according to a survey from McAfee.


Build a Monolith before Going for Microservices: Jan de Vries at MicroXchg Berlin

Designing a system using one silo or service for each business function is what De Vries prefers, meaning that each function becomes a command or request handler that handles everything needed for that function. Often services need to share some data, but instead of making synchronous calls between services, he recommends sending messages over some type of message bus. Each service can then read the messages it needs, irrespective of which service sent them. One benefit of isolating the parts like this is that they can use different technology stacks and data stores depending on their needs. De Vries points out, though, that just because you can, it doesn’t mean you must: he is a proponent of keeping things simple and prefers a single technology stack unless there is a good reason to step out to something else. If you aren’t sharing any business logic, you will probably end up with a lot of duplicated code. We have been taught that duplicated code is bad (DRY) and that we should instead abstract the duplication away in some way.
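The decoupling De Vries describes can be sketched with a tiny in-memory bus: services never call each other directly; one publishes a message, and any service subscribed to that topic reacts to it. The names here (`Bus`, `order_service`, the `"order.placed"` topic) are illustrative, and a production system would use a real broker such as RabbitMQ or Kafka rather than this toy.

```python
# Minimal message-bus sketch: publishers and subscribers only know topics,
# never each other, so either side can be replaced or rewritten independently.
from collections import defaultdict
from typing import Callable, Dict, List


class Bus:
    """Toy synchronous message bus standing in for a real broker."""

    def __init__(self):
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


# Two independent "silos", each handling its own business function end to end.
shipped = []


def order_service(bus: Bus, order: dict) -> None:
    # Handle the command, then announce the outcome on the bus; the order
    # service has no idea who (if anyone) is listening.
    bus.publish("order.placed", order)


def shipping_handler(message: dict) -> None:
    shipped.append(message["order_id"])


bus = Bus()
bus.subscribe("order.placed", shipping_handler)
order_service(bus, {"order_id": 42, "item": "book"})
```

Because the only shared contract is the message shape, the two services could run on entirely different stacks, which is exactly the flexibility (and the duplication trade-off) discussed above.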



Quote for the day:


"To know what people really think, pay regard to what they do, rather than what they say." -- René Descartes