Daily Tech Digest - August 07, 2018

Disentangling The Data Centre ‘Skills Shortage’ Conundrum?


In principle, a skills shortage appears where there is a mismatch between the capabilities available and the roles vacant. We certainly have that, but compounding this issue is the physical lack of people. Genuine skills shortages can be resolved by retraining the people available to work in available roles. It’s a pretty simple equation: find people, train or retrain them, and employ them. Vocational and specific training programs resolve this issue in a highly effective manner and should not be discounted as part of a broader response. This is particularly so where existing labour forces are provided with the vocational skills to keep up with changes associated with technology, customer demand or process shifts, for example. Sadly for the data centre sector, we have an underlying labour shortage too. We simply do not have enough people coming into the sector to train into the roles available or to keep up with expected shifts in demand. We have both skills AND labour shortages. Each one demands a different suite of interventions, and this is just where the complexity starts.



What’s the difference between a BCMS and a BCP?

Organisations and regulators don’t often agree on how businesses should be run, but lately both have championed the adoption of business continuity – a method that enables organisations to keep functioning during an incident, and address the prevention of and response to disruptions. Business continuity has proved essential in the modern landscape, with the number of cyber attacks on the rise and the amount of information being stored by organisations growing rapidly. But for all the agreement over the importance of business continuity, there is one area of disconnect. Some organisations have adopted a BCMS (business continuity management system) and others a BCP (business continuity plan). This might sound like two names for the same thing, but there’s an important difference. ... It’s possible to have a BCP but not a fully-fledged BCMS. That’s because there are further steps to a BCMS after the plan is in place – namely: developing, testing and reviewing the BCP. Completing these steps obviously involves a bigger investment in time and resources.


4 Artificial Intelligence Use Cases That Don’t Require A Data Scientist


Today, your IT operations team likely spends a huge amount of time and mental energy tending to performance thresholds—for example, when an application slows down too much, the system generates an alert. But as the application code, the configurations, or the infrastructure change, the ops team must constantly reset and manage those thresholds. The amount of monitoring data generated is also growing significantly, which means the IT ops team is doing a lot of work just managing logs, which provide the data for setting thresholds. A better way is to put all the web, application, and database performance data, the user experience data, and the log data into one cloud-based data platform. Then let that system—using baseline-setting algorithms in machine learning—learn what the thresholds should be. With the baseline established, another technique called anomaly detection can identify when application performance is trending toward these thresholds, and trigger alerts with suggested corrective actions or automatically take corrective action.
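The baseline-then-detect flow described above can be sketched in a few lines. This is an illustrative assumption, not any vendor's actual algorithm: the sample data, the three-standard-deviations threshold, and the function names are all hypothetical.

```python
import statistics

def learn_baseline(samples):
    """Learn a simple baseline threshold (mean + 3 standard deviations) from history."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return mean + 3 * stdev

def detect_anomalies(baseline, new_samples):
    """Flag readings that exceed the learned threshold."""
    return [s for s in new_samples if s > baseline]

# Hypothetical historical response times in ms, steady around 100 ms
history = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
threshold = learn_baseline(history)

# A new batch with one clear outlier that should trigger an alert
alerts = detect_anomalies(threshold, [100, 104, 250, 99])
```

The point is that the threshold is derived from the data rather than set by hand, so when the application's normal behaviour shifts, re-learning the baseline replaces manual re-tuning.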


Raspberry Pi and machine learning: How to get started

Although the relatively low-specced Pi isn't an obvious choice for machine learning, the board's compact size and low power consumption mean it's well suited to building mobile homemade gadgets and robots. Machine learning can help these devices handle new tasks, using image recognition to "see" and speech recognition to "hear". However, there are definite limits to the Pi's ML capabilities. There are two main stages to machine learning: training, during which the model learns how to perform a given task, and inference, when the trained model is used to perform that task. The Pi's limited processing power means it's not suitable for training anything but the simplest machine-learning models. Instead this stage is typically carried out on a machine with at least a mid- to high-end GPU. However, the Pi is capable of performing inference, of actually running the trained machine learning model, albeit rather slowly.
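The training/inference split described above can be illustrated with a deliberately tiny sketch: the weights below stand in for parameters that would have been trained offline on a GPU machine, while the device itself only runs the cheap forward pass. The numbers and names are invented for illustration.

```python
import math

# Parameters assumed to have been trained elsewhere (e.g. on a GPU workstation)
# and shipped to the device frozen; no learning happens here.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3

def predict(features):
    """Inference only: a weighted sum pushed through a sigmoid."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

score = predict([1.0, 0.2, 0.5])
label = "positive" if score >= 0.5 else "negative"
```

In practice a Pi would run something like a quantised TensorFlow Lite model rather than hand-coded weights, but the division of labour is the same: heavy training elsewhere, lightweight inference on the board.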


How Connected Cars And Insurance Are Influenced By Big Data

Many insurance carriers use acquired customer driving patterns to set insurance premium rates accordingly. The likes of premium discounts emanating out of driving behavior, mileage, and other metrics are slowly becoming realities in this highly innovative arena. There are a host of other evaluating models like PAYD, PHYD and MHYD, which are postulated as different versions of the Usage Based Insurance plan. Each one of these models targets a specific driving metric, thereby offering insurance premium rates by analyzing the quality of the concerned driver. With premiums directly related to driving performance, connected cars can blur the lines between vehicle usage and customer privacy, as anything and everything inside the vehicle can be tracked, rather seamlessly. ... With the applications growing in large numbers, a relatively stronger ecosystem is being created around connected cars. The participants include sensor manufacturers, telecommunication firms, insurance companies, and even the automakers, with each one connected to the other by the threads of Big Data.


World's first four-bit 4TB SSDs for consumer devices coming this year

The downside of moving up to four bits per memory cell, according to Samsung, is that it makes it harder to maintain a device's performance and speed because the extra density would cause the electrical charge to fall by as much as half. However, Samsung says its new SSDs are on par with the performance of its three-bit SSDs, achieved by using a three-bit SSD controller, its TurboWrite technology, and boosting capacity by using 32 chips based on its 64-layer fourth-gen 1Tb V-NAND chip. Samsung boasts that its QLC SSDs will improve efficiency for consumer computing, including in smartphone storage, where the 1Tb four-bit V-NAND chip will allow it to efficiently churn out 128GB memory cards for smartphones. ... Samsung is planning on releasing four-bit consumer SSDs later this year with 1TB, 2TB, and 4TB capacities in the widely used 2.5-inch form factor. As Samsung notes, this is a massive step up from the 32GB one-bit SSD it launched in 2006, followed by its two-bit 512GB SSDs in 2010, and three-bit or triple-level cell SSDs in 2012.


Consumer Sentiments About Cybersecurity and What It Means for Your Organization


While suffering a data breach is never ideal, the survey also shows that honesty, transparency and a timely emergency response plan are critical. Companies must clearly communicate that a breach has occurred, who is likely to be impacted, and the planned remediation actions to address the issue. Organizations that don’t admit to compromised consumer records until long after the breach took place suffer the greatest wrath from consumers. Successful organizations must create a secure climate for customers by embracing technology and cultural change. Security threats and data breaches can seriously impact a customer’s loyalty, thereby damaging the corporate brand, increasing customer churn, and incurring lawsuits. Corporate leaders must recognize the multiple pressures on their organizations to integrate new network technologies, transform their businesses and defend against cyberattacks. Executives who are willing to embrace technology and cultural change, and to prioritize cybersecurity, will be the ones to win the trust and loyalty of the 21st century consumer.


Adapting Blockchain for GDPR Compliance

Perhaps the most interesting — and most controversial — article related to Blockchain’s applicability to GDPR is Article 25, “Data protection by design and by default,” which addresses pseudonymization techniques for consumers’ stored data. Hashing is Blockchain’s pseudonymization technique, and there are two critical interpretations of the pseudonym linkage using Blockchain relative to Article 25. The first holds that because Blockchain hashing accomplishes pseudonymization, though not anonymization, the data linkage is no longer considered personal once it is established, and if this linkage is deleted, it also complies with Article 17. However, the second interpretation is that pseudonymization, even with cryptographic hashes, can still be linked back to the original PII data. There may still need to be mathematical proof of whether a brute-force attack on the off-chain data linkage using hashing can compromise this assumption.
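The second interpretation is easy to demonstrate concretely: a hash pseudonym is only as private as the input space is hard to enumerate. A minimal sketch, using invented example values (the email addresses and the unsalted scheme here are illustrative, not a recommended design):

```python
import hashlib

def pseudonymize(value, salt=""):
    """Hash PII to a pseudonym, as a ledger might store it in place of raw data."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

# A hash stored on-chain in place of an email address
stored = pseudonymize("alice@example.com")

# Brute-force linkage: an attacker who can enumerate candidate values
# simply hashes each one and compares, re-identifying the pseudonym.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
recovered = next((c for c in candidates if pseudonymize(c) == stored), None)
```

This is why hashing alone is treated as pseudonymization rather than anonymization under GDPR: when the underlying values are guessable, the linkage back to the person survives the hash.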


BGP hijacking attacks target payment systems


Justin Jett, director of audit and compliance for Plixer, said BGP hijacking attacks are "extremely dangerous because they don't require the attacker to break into the machines of those they want to steal from." "Instead, they poison the DNS cache at the resolver level, which can then be used to deceive the users. When a DNS resolver's cache is poisoned with invalid information, it can take a long time post-attack to clear the problem. This is because of how DNS TTL works," Jett wrote via email. "As Oracle Dyn mentioned, the TTL of the forged response was set to about five days. This means that once the response has been cached, it will take about five days before it will even check for the updated record, and therefore is how long the problem will remain, even once the BGP hijack has been resolved." Madory was not optimistic about what these BGP hijacking attacks might portend because of how fundamental BGP is to the structure of the internet.
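The TTL mechanics Jett describes can be sketched with a toy resolver cache. The hostname and address below are invented documentation values, and real resolvers are far more involved, but the persistence property is the same: the cached answer is trusted until the TTL runs out.

```python
import time

class DnsCache:
    """Minimal resolver cache: entries are trusted until their TTL expires."""
    def __init__(self):
        self._store = {}

    def put(self, name, address, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._store[name] = (address, now + ttl_seconds)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry and now < entry[1]:
            return entry[0]  # still cached -- no fresh lookup is performed
        return None

FIVE_DAYS = 5 * 24 * 60 * 60
cache = DnsCache()
# A forged answer injected during the hijack, cached with a 5-day TTL
cache.put("payments.example.com", "203.0.113.66", FIVE_DAYS, now=0)

# Even after the hijack is fixed at the source, the forged answer
# keeps being served until the TTL elapses.
still_poisoned = cache.get("payments.example.com", now=FIVE_DAYS - 1)
expired = cache.get("payments.example.com", now=FIVE_DAYS + 1)
```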


The 14 soft skills every IT pro needs

“Great knowledge in a vacuum doesn’t benefit an organization,” says Wilgus. “Every IT project — and position — is going to conclude with a deliverable, for example a design document, presentation, attestation report or updated code base. Without the necessary soft skills, the intended message being expressed in the deliverable could be lost. Candidates that have presented at conferences, or have been published, will have a leg up on other candidates. ... If there are errors in a two-page resume, what’s the likelihood this candidate can produce a formal report of more substantial length? Candidates should expect hiring organizations will ask for a writing sample." ... “Active listening is the process of reflecting back not only what you hear the other person saying but also to validate and verbalize the nontechnical aspects of the conversation,” Adato says. “This is one way to demonstrate emotional intelligence. Leveraging this technique gives the individual speaking the opportunity to clarify, while simultaneously demonstrating that this information matters to you personally.”



Quote for the day:


"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Daily Tech Digest - August 06, 2018

How quantum computers will destroy & save cryptography

To crack most current public key encryption, it would take a quantum computer with at least 4,000 perfect qubits, or many times that number if the qubits were imperfect. How close are we to a perfect 4,000 qubits? It depends on who you ask. Dr. Jackson is confident that we’ll have perfect 4,000-qubit quantum computers in the next five years. He has some evidence to support his claim, although we are nowhere near 4,000 perfect qubits. In March 2018, Google announced an imperfect 72-qubit computer. Google’s current (publicly known) implementation makes a mistake about once every 200 calculations. When you’re doing billions of calculations a second, that error rate is an unusable disaster. Tens if not hundreds of billions of dollars are being spent around the world trying to make more stable quantum computers. Some say that the jump needed to get to 4,000 qubits is not as daunting as it once was. Dr. Jackson, who is directly working with quantum computers, says, “We have gone from nine to 72 qubits in just one year, so it’s not crazy at all that we could get 4,000 in another five [years]. Given that the US government finally got on board a few months ago, I think that’s now a conservative estimate.”
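To see why an error once per 200 calculations is "an unusable disaster," it helps to note that errors compound. A quick back-of-the-envelope calculation, assuming (for illustration only) that errors are independent and uniformly likely:

```python
# If a device errs roughly once per 200 operations, the probability that a
# longer computation finishes with no error at all shrinks geometrically.
ERROR_RATE = 1 / 200

def p_error_free(num_operations, error_rate=ERROR_RATE):
    """Chance that every one of num_operations completes without an error."""
    return (1 - error_rate) ** num_operations

p_200 = p_error_free(200)       # roughly a 37% chance of a clean 200-op run
p_10000 = p_error_free(10_000)  # effectively zero
```

Even a short 200-operation program succeeds only about a third of the time, and anything in the billions-of-operations regime essentially never completes cleanly, which is why error correction (or far better qubits) is the real hurdle.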



Evaluating Hyperledger Composer


Hyperledger Composer allows you to write smart contracts in server-side JavaScript. It makes available a native client library by which Node.js applications can access the ledger and submit transactions to these smart contracts. For the purposes of this experiment, I used an already developed Node.js microservice as the control. I copied the source code for that microservice to a new folder then I replaced all references to MySQL, Redis, and Cassandra with calls to the Hyperledger Composer client API. It is the feed7 project that serves as the test in this experiment. Both projects use Elasticsearch because one of the requirements of each news-feed service is a keyword-based search, and a blockchain is not appropriate for that. Like most of the other microservices in this repo, the feed7 microservice uses Swagger to define its REST API. The specification can be found in the server/swagger/news.yaml file. With Hyperledger Composer, you create a business network that consists of a data model, a set of transactions that manipulate the data model, and a set of queries by which those transactions can access data within the model.


Mastering MITRE's ATT&CK Matrix

Originally developed to support Mitre's cyberdefense work, ATT&CK is both an enormous knowledge base of cyberattack technology and tactics and a model for understanding how those elements are used together to penetrate a target's defenses. ATT&CK, which stands for Adversarial Tactics, Techniques, and Common Knowledge, continues to evolve as more data is added to the knowledge base and model. The model is presented as a matrix, with the stage of an attack along one axis and the mechanism for that stage along the other. By following the matrix, red team members can design an integrated campaign to probe any aspect of an organization's defense, and blue team members can analyze malicious behavior and technology to understand where it fits within an overall attack campaign. Mitre has defined five matrices under the ATT&CK model. The enterprise matrix, which this article will explore, includes techniques that span a variety of platforms. Four specific platforms — Windows, Mac, Linux, and mobile — each have their own matrix.
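The tactics-by-techniques structure lends itself to a simple lookup. The slice below is a tiny, illustrative subset: the technique names are drawn from the public framework but this is in no way a complete or authoritative listing of the enterprise matrix.

```python
# A tiny, illustrative slice of an ATT&CK-style matrix: tactics (attack
# stages) map to the techniques observed at that stage.
attack_matrix = {
    "Initial Access": ["Spearphishing Attachment", "Valid Accounts"],
    "Execution": ["PowerShell", "Scheduled Task"],
    "Exfiltration": ["Exfiltration Over C2 Channel"],
}

def tactics_for(technique):
    """Given an observed technique, find which stage(s) of a campaign it fits."""
    return [t for t, techs in attack_matrix.items() if technique in techs]

# A blue-team analyst mapping an observed behavior back to its stage
stages = tactics_for("PowerShell")
```

This reverse lookup is essentially what a blue team does when triaging an alert: place the observed mechanism on the matrix to infer how far an adversary's campaign has progressed.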


Intelligent transportation: The most important pillar of a smart city?

Intelligent transportation must be a first step in the smart city movement. This could include monitoring traffic patterns, highly trafficked pedestrian areas and metro stations, coordinating train times and much more. When cities host large events that increase traffic and security concerns, it becomes increasingly clear that any smart city initiative must begin with intelligent transportation. Intelligent transportation can improve overall situational awareness while enhancing interoperability and the ability to share information quickly. It provides a holistic approach to risk management as it fortifies cities' emergency preparedness and response capabilities for incidents such as fare evasion, vandalism or violence, medical emergencies, track obstructions and other similar types of disruptive events. ... Rather than focusing on short-term fixes, cities and states can find holes where smart transportation solutions can resolve major issues. This leads to improved traffic flow, better roads, and can even help support law enforcement by identifying safety hazards and where cameras should be installed.


The biggest data breaches in the ASEAN region

When it comes to data breach control, the prospects are even gloomier. Whereas the Philippines and Indonesia require that data controllers promptly notify affected users in the case of a data breach, Thailand, Brunei and Malaysia don’t have specific notification requirements in this particular scenario. This makes it more difficult to know the real extent of actual data breaches in those countries, as most of them go unreported. Since the lack of sector-specific governance and policies is a problem across the whole region, ASEAN could benefit from a coordinated approach similar to the one implemented in the European Union (EU). In 2013 the EU developed a Cybersecurity Package, a region-wide cybersecurity strategy to “enhance the EU’s overall performance” and to “safeguard an online environment providing the highest possible freedom and security for the benefit of everyone.” The package was reviewed last year and marks a milestone in the fight against cybercrime in the union. Below we have compiled a list of the most serious data breach incidents in the ASEAN region during the past few years.


Lessons From The Amazon Ecosystem

The ARM Holdings design ecosystem is a set of relationships between major mobile device vendors, silicon manufacturers and chip designers, usually organised by Arm. It’s a highly specialised and effective ecosystem that can push chip design in novel directions and move designs to manufacture quickly because of the inclusion of silicon factories and smartphone makers. Very few ecosystems work this way. In fact, the Amazon ecosystem in books is trying to optimise the Amazon platform rather than optimise or maximise market opportunity and customer success. Take the role of book arbitrage (mid-right in blue in the diagram above). In essence this means finding books deep in the Amazon catalogue, buying them cheaply, and then using more effective descriptions to sell them, also on Amazon, at a higher price. It makes up for Amazon’s indiscriminate search engine and the poor product descriptions of most booksellers. In the Amazon ecosystem it pays to get any good-enough product to market to tap into the long tail, at a very low price (99-cent novels). That is output rather than outcome; it is a product that does not necessarily please customers as much as might be possible at a lower volume of publishing.


By 2020, 1-in-5 healthcare orgs will adopt blockchain; here’s why

While there is some degree of network interoperability between healthcare providers, pharmacies and insurance companies through various frameworks like HIEs, they've had "varying degrees of success and penetration," IDC said. It cited innate shortcomings that include "limitations in the interoperability standard or protocol itself, workflow and policy differences between entities, information blocking, and technology requirements." Two leading HIEs – CommonWell Health Alliance, a trade association working toward healthcare record interoperability, and Carequality, a public-private collaborative created to establish a common interoperability framework – have had success in establishing a solid industry foundation for data exchange with the backing of EHR vendors. "And that's facilitating a somewhat limited form of query-based [data] exchange," said Mutaz Shegewi, IDC's research director for provider IT transformation strategies. Shegewi was referring to the ability to search for secured patient information online.


IT Managers: Are You Keeping Up with Social-Engineering Attacks?

Using both high-tech tools and low-tech strategies, today's social-engineering attacks are more convincing, more targeted, and more effective than before. They're also highly prevalent. Almost seven in 10 companies say they've experienced phishing and social engineering. For this reason, it's important to understand the changing nature of these threats and what you can do to help minimize them. Today's phishing emails often look like exact replicas of communications coming from the companies they're imitating. They can even contain personal details of targeted victims, making them even more convincing. In one incident, bad actors defrauded a U.S. company of nearly $100 million by using an email address that resembled one of the company's vendors. And in the most recent presidential election, hackers used a phishing email that appeared to come from Google to access and release a top campaign manager's emails. Bad actors can get sensitive data in many other ways. In one case, they manipulated call-center workers to get a customer's banking password.


NVME SSDs, The Insanely Fast Storage You Want In Your PC

It’s possible to add an NVMe drive to any PC with a PCIe slot via a $25 adapter card. All recent versions of the major operating systems provide drivers, and regardless of the age of the system you will have a very fast drive on your hands. But there’s a catch. To benefit fully from an NVMe SSD, you must be able to boot the operating system from it. That requires BIOS support. Sigh. Most older mainstream BIOSes do not support booting from NVMe and, most likely, never will. ... While just about any NVMe drive should make your system feel quicker, they are not all alike. Not even close. Where Samsung’s 970 Pro will read at over 3GBps and write at over 2.5GBps, Toshiba’s RC100 reads at 1.2GBps and writes at just under 900MBps. The difference can be even greater when the amount of data written exceeds the amount of cache on board. A number of factors affect performance, including the controller, the amount of NAND on board, the number of PCIe lanes (see above), and the type of NAND.
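A quick bit of arithmetic shows what those sequential-write figures mean in practice. The 50GB file size is an arbitrary example, and sustained real-world speeds will be lower once the on-board cache is exhausted:

```python
# Back-of-the-envelope: seconds to write a large file at the sequential
# write speeds quoted above (roughly 2.5 GB/s vs 0.9 GB/s).
def write_seconds(size_gb, speed_gb_per_s):
    return size_gb / speed_gb_per_s

fast = write_seconds(50, 2.5)  # 970 Pro class drive
slow = write_seconds(50, 0.9)  # RC100 class drive
```

Roughly 20 seconds versus nearly a minute for the same file, and the gap widens further on writes that blow past the drive's cache.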


Strong governance programs separate data lakes from swamps

A good data governance framework combined with a data catalog can keep a data lake pristine by cleaning up the disorderly swamp of data. A data catalog offers a single source of intelligence for data experts and other data users who need quick access to their data. Users can tag, document, and annotate data sets in the data catalog, continuously enriching the data and increasing the value of existing data assets while also eliminating data silos. A data catalog enables users to collaborate to understand the data’s meaning and use, to determine which data is fit for what purpose, and which is unusable, incomplete, or irrelevant. It provides a way for every user to find data, understand what it means, and trust that it’s correct. Businesses today are either building a brand new lake or cleaning up an existing data lake. Whether you’ve inherited a swamp, or are just starting out and want to keep your data lake pristine, establishing a set of policy-driven processes can help you avoid these four common data lake problems.



Quote for the day:


"Leadership in the past was a model of direction and control. Now it should help people set directions for the future and facilitate their delivery." -- John Bailey


Daily Tech Digest - August 05, 2018

Enterprise Infrastructure Management Requires the Right Strategy for Success


The first step is to simplify the environment. Beyond just the four categories outlined above, IT organizational structures and cultures also need to change. Streamlining the environment takes planning, time and effort. Look for solutions and approaches that further simplify the environment. At the same time, consider how these changes impact your processes and organizational structure. Not all of the changes will be based in technology. As the demands of your customers change, so will your organization and processes. Look for opportunities to address technical debt and remove old or un-needed processes. These two steps alone go a long way toward simplification. Part of simplification includes the introduction of automation. In the past, organizations faced the fact that they had to do everything themselves. This was partly due to a lack of mature and sophisticated solutions along with the ability to add more people to resolve issues. Today, that approach simply is no longer feasible. Humans cannot keep up with the rate of change. Solutions are far more mature and sophisticated than those of the past.



AI, Machine Learning, and the Basics of Predictive Analytics for Process Management

There are certain machine learning applications where you can achieve a high accuracy. If you’re doing image processing, and you use deep learning – a type of machine learning – and it’s trying to identify, “Is this a picture of a cat or is it a picture of a dog?” It turns out that just like humans, computers can also do that very well if given the right training data and you apply the right machine learning methods. There are also things out there that, regardless of how advanced the machine is, or how intelligent the human is, neither the machine nor the human can make accurate predictions about exactly which customer is going to cancel. But what you can do is draw the trends and assign probabilities. That’s the job of the predictive model: to assign probabilities of who is more or less likely to show whatever outcome or behavior you’re trying to predict. So you determine what would be helpful to predict, and then you find out, “I can’t predict accurately, but wow, I can predict a lot better than guessing.” Probably, in many cases, better than any human could because of all of this data at the computer’s disposal.
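The "assign probabilities rather than predict individuals" idea can be made concrete with a toy frequency model. The segments and outcomes below are entirely invented; the point is only that even simple historical counts beat blind guessing.

```python
from collections import defaultdict

# Hypothetical training data: (customer segment, did they churn?) pairs.
# No model can say *which* customer will cancel, but it can assign a
# churn probability per segment that is better than a coin flip.
history = [
    ("monthly", True), ("monthly", True), ("monthly", False),
    ("annual", False), ("annual", False), ("annual", True),
    ("annual", False), ("monthly", False),
]

counts = defaultdict(lambda: [0, 0])  # segment -> [churned, total]
for segment, churned in history:
    counts[segment][0] += int(churned)
    counts[segment][1] += 1

# Estimated probability of churn per segment
churn_prob = {seg: c / n for seg, (c, n) in counts.items()}
```

Real predictive models replace the single "segment" attribute with hundreds of features and a learned weighting, but the output is the same kind of thing: a probability to rank and act on, not a certainty.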


Cybersecurity stocks savaged for a second week as Symantec results disappoint


Symantec shares finished the week down 7.1% at $19.25, after a 7.8% decline Friday. Of the 29 analysts who cover Symantec, two have buy ratings on the stock, 25 have hold ratings, and two have sell ratings. Following earnings, analysts’ average share-price target fell to $21.05 from $23.36, according to FactSet data. Cowen analyst Gregg Moskowitz, who has an underperform rating on the stock, called it “another highly disappointing quarter” for Symantec. Jefferies analyst John DiFucci, who has a hold rating, said the company faces notable challenges in its enterprise business, namely its SEP 14 endpoint protection product and the Blue Coat Secure Web Gateway business. In a note, DiFucci said “in endpoint, the company faces a multitude of private upstarts as competitors offering modern solutions that are competitive with SEP 14. Similarly, in the Secure Web Gateway market, the company continues to face direct competition from companies such as Zscaler and iboss, and indirect competition from the next-generation firewall vendors offering URL filtering functionalities that are considered ‘good enough’ to meet the needs of some enterprises.”


GDPR: What's really changed so far?

While some users will have chosen to give their consent, many will have withdrawn it and others may not have been able to explicitly give it as emails were lost in old in-boxes or junk mail folders -- for organisations, that led to the same result as opting out. "The opt-in environment can only have reduced business volume in the activity of direct marketing -- it can't have made it go up, it can only make it go down," said Stewart Room, lead partner for GDPR and data protection at PwC. "What it has done is it's increased awareness. There was more outreach done on data protection in the months of May and June 2018 in Europe than has ever been done in the entirety of the world in the history of data protection," said Room. While there's a focus on organisations like Facebook and Google which are well known for using data as a product for generating revenue, they're far from the only ones which have been hit by GDPR.


‘Moneyball’ing data – A closer look at how churn and propensity models work


So how does a propensity to buy model work? Similar to the churn model, it looks at past behavior, attributes, demographics, sales data, etc. of the best customers in your training data that you want more of. For example, there is a set of a thousand customers that are your real cash cows and spend $1000+ on your merchandise every month. This becomes the protagonist that you are going to refer to and compare the rest of your training data set with. Let’s say that one of the patterns the model detected was that the majority of the customers that bought $1k+ of merchandise were loyal to one specific brand in your store. This purchase pattern becomes a base for you to start marketing to others that have bought that specific brand but are in the $700 per month bucket. (What do you market to them? Look at the basket of the $1k+.) This is just one example. Propensity models can slice and dice your data to look at attributes, behavior, and patterns that might be so counterintuitive that a human can never see a connection between them.
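The brand-loyalty example above can be sketched end to end in miniature. The customer records, spend figures and brand names are all invented, and a real propensity model would learn the pattern statistically rather than by this hand-rolled comparison:

```python
# Toy propensity sketch: find the brand pattern shared by $1k+ customers,
# then surface mid-tier customers who match it as marketing targets.
customers = [
    {"id": 1, "monthly_spend": 1200, "top_brand": "Acme"},
    {"id": 2, "monthly_spend": 1500, "top_brand": "Acme"},
    {"id": 3, "monthly_spend": 700,  "top_brand": "Acme"},
    {"id": 4, "monthly_spend": 650,  "top_brand": "Other"},
]

high_value = [c for c in customers if c["monthly_spend"] >= 1000]

# The "pattern the model detected": the brand most common among top spenders
pattern = max({c["top_brand"] for c in high_value},
              key=lambda b: sum(c["top_brand"] == b for c in high_value))

# Lookalikes: lower-spend customers who already share the pattern
targets = [c["id"] for c in customers
           if c["monthly_spend"] < 1000 and c["top_brand"] == pattern]
```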


The impact of cloud migration strategies on security and governance

Most of these decisions are about governance and risk management. With lift and shift, the application functionality is pretty clear, but bringing that out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for cloud, leading to poor performance and high cost. One group of SaaS applications stems from ‘shadow IT’. The people that adopt them typically pay little attention to existing risk management policies. These can also add useless complexity to the application landscape. The governance challenges for these are obvious: consolidate and make them more compliant with company policies. Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.


Oracle vs. Hadoop


Despite sophisticated caching techniques, the biggest bottleneck for most Business Intelligence applications is still the ability to fetch data from disk into memory for processing. This limits both the system's processing and its ability to scale — to quickly grow to deal with increasing data volumes. As there’s a single server, it also needs expensive redundant hardware to guarantee availability. This will include dual redundant power supplies, network connections and disk mirroring which, on very large platforms, can make this an expensive system to build and maintain. Compare this with the Hadoop Distributed Architecture below. In this solution, the user executes SQL queries against a cluster of commodity servers, and the entire process is run in parallel. As effort is distributed across several machines, the disk bottleneck is less of an issue, and as data volumes grow, the solution can be extended with additional servers to hundreds or even thousands of nodes. Hadoop has automatic recovery built in such that if one server becomes unavailable, the work is automatically redistributed among the surviving nodes, which avoids the huge cost overhead of an expensive standby system.
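The scatter-and-merge idea behind the distributed architecture can be sketched in a few lines. In Hadoop the shards live on different machines and the map and reduce steps run as distributed jobs; here, purely for illustration, they are plain lists and function calls computing an average:

```python
# Each "node" scans only its own shard of the data (the map step),
# and the partial results are combined afterwards (the reduce step).
def node_scan(shard):
    """Local work on one node: partial sum and row count for its shard."""
    return sum(shard), len(shard)

def merge(partials):
    """Combine the partial aggregates into a global average."""
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

shards = [[10, 20, 30], [40, 50], [60, 70, 80, 90]]  # data spread over 3 nodes
partials = [node_scan(s) for s in shards]
avg = merge(partials)
```

The recovery property described above falls out of the same structure: if one node fails, only its shard needs to be rescanned on a surviving node before the merge is redone.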


10 Dark Web warning signs that your organization has been breached

In the wake of seemingly constant high profile breaches, organizations are taking precautions to protect against cyberattacks, including raising security budgets and educating employees. However, the cost of a breach can be enough to significantly harm a company's finances and reputation: The average total cost of a data breach is $3.86 million, according to a recent Ponemon Institute report. The ongoing risk of attack has led some organizations to seek new ways to proactively monitor the Dark Web for lost or stolen data, according to a Wednesday report from Terbium Labs. ... Dark Web and clear web sites like Pastebin are a dumping ground for personal, financial, and technical information with malicious intent, the report said. There is often a motivation behind these posts, such as political beliefs, hacktivism, vigilantism, or vandalism. For example, the executive of a wealth management firm was included in a large-scale dox as the result of their political contributions, the report noted.


Agile: Reflective Practice and Application


Focusing on your own local efficiency can lead to working on what is not needed, which at best does nothing for the larger system and at worst makes the larger system less efficient. The obsession with coding efficiency in particular kills a great many software products. I see teams actually proud of a growing pile of stories in need of testing, or a team dedicated to front-end UI proud of having endless features complete against mocks while the back-end teams can't keep up. Sadly, these teams seem oblivious to the fact that they are not adding value to the system. Let me give an example that a friend of mine shared recently: my friend was baking cakes for an event and needed to bake 10 cakes, but only had one oven. Her husband offered to help out, so she asked him to measure out the ingredients for the cakes in advance so that it would be quicker to get the next cake in the oven. When she came to get the ingredients for the next cake, they were not ready; her husband had optimised for himself and not for the goal.


An IT operating model for the digital age

Consider a typical IT team – generally, all tech staff will sit in their own division, removed from the rest of the business because it is easier to track, manage and budget their work. What happens, then, if the head of customer experience has a request? It is unlikely that customer experience teams, which have different key performance indicators (KPIs), will have much interaction with IT. The result is two frustrated parties lacking a common language and unable to deliver innovation at the pace required by customers and the wider business.  The challenge is to reorganise team structures in a way that allows innovation to flourish. In the era of digital transformation 1.0, that meant a bolt-on or “bi-modal” approach to digital, essentially giving a dedicated team the resources and licence to operate at pace, while the rest of the business continued plodding along in a traditional environment. It is not a bad place to start to get digital initiatives prioritised, but the reality is that “digital” now impacts every transaction and every touchpoint.



Quote for the day:


"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock


Daily Tech Digest - August 03, 2018

Edge networking was only one of the areas of growing interest revealed in the study. Another hot technology is Intent-Based Networking (IBN), which basically employs automation, analytics, intelligent software and policies that let network administrators define what they want the network to do. Cisco and Juniper, along with startups such as Apstra, have made IBN technology a relatively new industry buzzword, and the study bears that out: More than half of the network professionals surveyed are familiar with intent-based networking (54%), and one-third of them work at companies with IT budgets of more than $1 billion. "It's not surprising then that only 3% report adoption of an intent-based network and 8% are beginning to execute an intent-based networking strategy, including investing in SDN [software-defined networking], virtualization, machine learning, model-based APIs and security tools. A larger pool (38%) have not yet considered this strategy but plan to begin research in the next 12 months," Network World wrote.



Ending the estrangement: Why the CIO and the CMO need to collaborate


Breaking down data silos and supporting a compelling customer experience are at the top of the list. Creating a single view of the customer and personalizing communications involves more than building out APIs to connect internal and external data sets. Customer identity management needs to be centralized, so that updates to one part of a profile are instantly and automatically reflected in all the databases in which that customer's data is held. This is a major marketing requirement, but the execution is up to IT. Marketing also needs IT's help to navigate new data privacy standards such as the EU's General Data Protection Regulation (GDPR). ... It is up to IT to determine the right balance between personalization and privacy without compromising marketing's effectiveness or breaking the law. Good models for this are the IT departments in highly regulated industries, such as financial services and healthcare, that have already succeeded at putting terabytes of customer data in the hands of their marketers while remaining compliant with myriad regulations. In sum, marketing can no longer go it alone with its digital agenda.



Unit Testing With Mockito

TDD (Test-Driven Development) is an effective way of developing a system by incrementally adding code and writing tests. In this article, we explore a few areas of writing unit tests using the Mockito framework; we use Java for the code snippets. Mocking is a way of producing dummy objects, operations, and results that stand in for real scenarios. This means the test needs no real database connection and no real server up and running: the mock mimics them, so the relevant lines of code are still covered and still produce a result that can be compared with the expected result and asserted. Mockito is a framework that facilitates mocking in tests. Mockito objects are proxy objects that stand in for operations, servers, and database connections. We use dummies, fakes, and stubs wherever applicable for mocking, and we use JUnit together with the Mockito framework for our unit tests.
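As a minimal sketch of that workflow (create a mock, stub a call, exercise it, verify), assuming mockito-core is on the classpath; the List example follows the style of Mockito's own documentation:

```java
import static org.mockito.Mockito.*;
import java.util.List;

public class MockitoSketch {
    // Stub a List so that get(0) returns a canned value, then verify the call.
    @SuppressWarnings("unchecked")
    static String stubbedFirst() {
        List<String> mockedList = mock(List.class);   // proxy object, no real list behind it
        when(mockedList.get(0)).thenReturn("first");  // canned result for one specific call
        String result = mockedList.get(0);            // returns the stub; touches no real code
        verify(mockedList).get(0);                    // assert the interaction actually happened
        return result;
    }

    public static void main(String[] args) {
        System.out.println(stubbedFirst()); // prints "first"
    }
}
```

In a real test this logic would live in a JUnit `@Test` method, with the returned value checked via an assertion rather than printed.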


Audi to test 5G use cases in car production


Audi and Ericsson believe that many potential characteristics of the emerging 5G standard – notably its ability to run fast, low-latency, high-capacity, highly secure mobile networks – lend themselves to complex, automated production environments such as a car factory. In Germany, the trend towards digitisation of industrial production is known as Industry 4.0, and it is a key government initiative spearheaded by the German Ministry of Education and Research (BMBF) and the Ministry for Economic Affairs and Energy (BMWi) under Chancellor Merkel. The first phase of the collaboration will see Audi and Ericsson working together on a latency-critical application using wireless production robots equipped with a gluing application for bodywork construction. Eventually, the Audi lab – which, in recent years, has explored big data in supply chain logistics and augmented reality in engine assembly, among other things – will be equipped with a 5G-enabled, simulated production environment that mirrors Audi's real-life production line in nearby Ingolstadt.


5 Google Assistant tasks that will make your work life easier

Google Assistant is, without question, the most powerful and user-friendly virtual assistant on the market. Powered by AI, Assistant can help you with so many things: from answering questions to scheduling to making reservations to getting the latest weather from Mars—and even helping you with your bedtime routine. Yet for most users, the depth and breadth of what Assistant can do goes largely untapped. Why? Because it can do so much. With that in mind, I thought I'd share with you five tasks that can make your busy life a bit easier. Of course everyone's idea of "easier" varies, so I'm going to attempt to make these as broad and universal as possible ... at least within the realm of IT. Keep in mind, this is about making your life a bit easier, not more productive. Whether you're shopping for a family, your department, a client, a job, or yourself, it can be a daunting task to keep track of what you need to purchase ... especially when you're on the go. Driving to a client? The last thing you need is to pull out that phone to remind yourself to pick up Cat5 cable. Instead, use Google Assistant.


PSD2: Blessing or Curse for Banks?


PSD2 is a new European regulation that forces all banks in the European Union to open up their systems to outside players. Banks have to offer three APIs free of charge to all third parties approved by the ECB: Accounts, Transactions and Payments. By forcing banks to open up a number of their core systems, the ECB hopes to stimulate innovation in the financial industry through an open API ecosystem. PSD2 brings a lot of challenges and investments for banks without any financial compensation in return: they have to invest in an API gateway, API security and modernising some of their core systems to expose APIs, and they have to offer the same performance for their APIs as for their existing banking app and website. So, as a bank, you can definitely consider PSD2 a legal obligation similar to GDPR. On the other hand, you can also see it as a first step in opening up your core systems and becoming a digital player. If you were planning to strive for an open API business strategy, then the PSD2 investments are necessary anyway. PSD2 forces you to offer the three APIs for free, but it also doesn't prevent you from monetizing other APIs.


Your hacked devices are being used for cyber crime says FBI

"Devices in developed nations are particularly attractive targets because they allow access to many business websites that block traffic from suspicious or foreign IP addresses. Cyber actors use the compromised device's IP address to engage in intrusion activities, making it difficult to filter regular traffic from malicious traffic," said the alert. IoT devices make easy targets for attackers because many are still shipped with poor security, often enabling attackers to gain access with default usernames and passwords, or by using brute-force attacks to guess passwords - and that's if the devices even have authentication processes in the first place. When security loopholes are uncovered in IoT devices, some vendors will push out firmware and software updates in order to prevent vulnerabilities being exploited - but given that large numbers of smart devices are connected to the internet and then forgotten about, it's not guaranteed that users will apply the patches required to protect them from attacks.


How to identify a high-performing tech job candidate: 5 traits

If companies want high-performance employees, they must foster an environment where those individuals can flourish. "Regardless of what business you're in, if you want to improve a team it's critical that employees are engaged," said Cameron Smith, senior global director at Genesys. "Gallup's 2017 State of the American Workplace report found, 'Business or work units that score in the top quartile of their organization in employee engagement have nearly double the odds of success when compared with those in the bottom quartile.'" To fundamentally improve a team, supervisors and business executives need to step up. Employees can't be engaged in a company that isn't worth engaging in. "Instead, it's more of a two-way street, with companies playing a large role in fostering talent," said Smith. "The competence of an employee's supervisor, making sure appropriate workloads are assigned, and company culture all play a role in keeping staff performing at a high level."


Manage APIs with connectivity-led strategy to cure data access woes


Once DevOps teams deliver microservices and APIs, they see the value of breaking down other IT problems into smaller, bite-size chunks. For example, they get a lot of help with change management, because one code change does not impact a massive, monolithic application. The code change just impacts, say, a few services that rely on a piece of data or a capability in a system. APIs make applications more composable. If I have an application that's broken down into 20 APIs, for example, I can use any one of those APIs to fill a feature or a need in any other application without the applications impacting each other. You remove the dependencies between the applications that talk to these APIs. Overall, a strong API strategy allows software development to move faster, because you don't build from the ground up each time. Also, when developers publish APIs, they create an interesting culture dynamic of self-service.


Apache OpenWhisk vulnerability targets IBM Cloud Functions

According to PureSec's tests, an intruder could exploit the vulnerability to insert malicious code with the same permissions as the serverless function it replaced. Specifically, a remote attacker could overwrite the source code of a vulnerable function executed in a runtime container and influence subsequent executions of the same function in the same container. The attacker could then extract confidential customer data, such as passwords or credit card numbers; modify or delete data; mine cryptocurrencies; and more, Segal said. Other OpenWhisk-based serverless platforms, such as Adobe I/O, were not impacted by the vulnerability because a provider may opt not to use the runtime images provided by Apache OpenWhisk, said Rodric Rabbah, co-creator of OpenWhisk and recent co-founder of CASM, a stealth startup focused on serverless computing. OpenWhisk accepts user functions and then dynamically injects that code into Docker container images; a vendor can provide its own images, for example to provide a runtime that contains libraries that are important for their organization.



Quote for the day:


"Without deviation from the norm, progress is not possible." -- Frank Zappa


Daily Tech Digest - August 02, 2018

How You Can Bridge the IT Training Gap

"Organizations must ensure they're creating opportunities for staff to get to know the business beyond just their department," noted Timothy Wenhold, chief innovation officer at Power Home Remodeling, a national home remodeling firm. "When we onboard new hires, we have them spend two weeks shadowing every department, regardless of their level and years of experience," he said. "This gives the staff the direction needed to align their technical training goals so that they match the business' needs." Ideally, there should always be a mix of different types of training. "The organization may want to carry out some type of assessment prior to the training to understand what areas should be addressed over others," suggested Ben Jordan, a security specialist with cybersecurity firm GreyCastle Security. "After training is completed, employees should be given the opportunity to give feedback about the training." "Whether IT training happens in a classroom, on the computer, on the job, or on your own — internally, externally or a mixture of both — all this matters less than ensuring that training is a recurring program and not a one-time, easily forgotten session," commented Thomas LaMonte, a senior analyst with tech research firm Gartner Digital Markets.



Mexico's fintech industry is on fire

Passed in early March 2018, Mexico's fintech law received support from every major party, passing with 75 percent of the votes. And though it did place some restrictions on the space, the law was overwhelmingly supportive of the industry as a whole. The law even provided a loose definition of digital assets: "...the representation of value registered electronically and used by the public as a means of payment for all types of legal acts and whose transfer can only be carried out through electronic means." This is important because it opened the door for fintech companies to utilize cryptocurrencies in remittance transactions, a space which accounted for over $28 billion coming into the country, representing 10 percent of Mexico's total GDP growth in 2017. Before, these payments carried significant fees and could take days to process, but with the new law, citizens can now access these services through fintech institutions utilizing cryptocurrencies at a lower rate and with much faster processing times.


Microsoft rejiggers Windows 10 Enterprise subscriptions, pricing

Changes to Windows 10 Enterprise were spelled out in some detail, even though new pricing was not disclosed. "For Windows, we're taking steps to recalibrate the price and rename the per device/per user offers, optimizing on our strategy of Microsoft 365," Microsoft wrote in an FAQ. "Part of this is about clarity," said Wes Miller, an analyst with Kirkland, Wash.-based Directions on Microsoft, talking about licensing. But he also said the changes, both in pricing and nomenclature, are further efforts by Microsoft to move customers to the licensing model where rights are tied to users, not to devices. Server-based desktops, for example, are only possible under Microsoft's per-user licensing, Miller pointed out. Windows 10 Enterprise E3 and Windows 10 Enterprise E5 debuted in 2016, when Microsoft began selling subscriptions to the operating system, specifically Windows 10 Enterprise, the operating system's top-tier version. Unlike Microsoft's legacy licensing - in which the operating system is licensed on a per-device basis - the E3 and E5 subscriptions are per-user. A licensed user could work at any of five allowed devices equipped with Windows 10 Enterprise.


DNS: Strengthening the Weakest Link

New specifications were defined in 2005 to address DNS's lack of security. DNS Security Extensions (DNSSEC) provides origin authentication, data integrity and authenticated denial of existence. However, the specifications do not address availability or confidentiality. The main goal of DNSSEC was to preclude DNS spoofing and DNS cache poisoning. DNSSEC adoption remains a long-term challenge and implementation has been slow. According to ISOC, only about 0.5% of zones in .com are signed. That's because, when compared to DNS, DNSSEC is complex, introduces computation and communication overhead, and requires significant infrastructure changes for organizations. IT organizations should make DNS infrastructure protection top of mind due to the absence of built-in security mechanisms in the DNS protocol. Specifically, DNS security requires rethinking perimeter security. Many organizations address DNS security by provisioning a DNS firewall and/or competent DNS servers, leaving the perimeter unattended.


How to identify and protect high-value data in the enterprise


The definition of high-value data is not one size fits all, as we all define our data differently. When considering what counts as high-value data versus regular data, it is important to take a step back and use a holistic, risk-based approach: classify your data based on what can impact you the least to the most. Consider adding a few flavors to your data classification formula, such as the value of the data, the consequences of its loss or exposure, and the likelihood of occurrence, and also ensure that you are measuring and defining your data on a consistent basis. Using the above approach and examples, take a deep breath and two steps back. Close your eyes and list a few data assets around you. Classify them in your head, spin it a few times, and then write them down. Make sure you are not trying to capture all of the data at once, as doing so can be a dangerous move and will probably overheat your brain; limiting your scope is the key.
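One way to make those "flavors" concrete is a toy scoring formula: rate each factor from 1 to 5 and multiply, then bucket the product into classification tiers. The weights, scale and tier names below are illustrative assumptions, not a standard.

```java
public class ClassificationSketch {
    // Toy formula: score = value x loss impact x likelihood, each rated 1-5,
    // giving a product between 1 and 125; higher products map to stricter tiers.
    static String classify(int value, int lossImpact, int likelihood) {
        int score = value * lossImpact * likelihood;
        if (score >= 60) return "restricted";
        if (score >= 20) return "confidential";
        return "internal";
    }

    public static void main(String[] args) {
        System.out.println(classify(5, 5, 4)); // e.g. customer card data -> restricted
        System.out.println(classify(3, 3, 3)); // e.g. internal reports -> confidential
        System.out.println(classify(2, 1, 2)); // e.g. public marketing copy -> internal
    }
}
```

The point is less the exact thresholds than the discipline: every asset gets scored on the same consistent basis, which is what the paragraph above asks for.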


How GDPR Could Turn Privileged Insiders into Bribery Targets

GDPR mandates hefty penalties for companies that are breached. Penalties can reach as high as 4% of a violator's annual revenue. (Remember, Google and Facebook are already facing $9 billion in fines.) This means that in many cases, penalties will far outweigh the actual cost of a breach, which criminals know. Rather than auction stolen data to fellow crooks for pennies or try to exact a ransom to decrypt it, criminals will start to ransom stolen data back to the organizations they stole it from in exchange for not exposing it publicly. The extortion price will be substantially higher than what could be earned on the Dark Web but significantly lower than an actual GDPR breach fine. Paying extortion may create an ethical dilemma for companies, but it will make smart business sense, as it will be much lower than the financial penalties. Privileged insiders are central to this scenario. Cybercriminals will be motivated to bribe them, as holders of the kingdom's keys, into giving up their credentials. Once criminals have hold of these, they will have an opportunity to earn payouts way beyond anything ever seen in the past.


Why innovation requires transformational leadership

We must continuously build and challenge our assumptions at the same time, and let our direction and momentum be dictated by that process: one informed by what we know about today and, as far as we can predict, about tomorrow. Unlike traditional 'strategy', this creates a much better readiness to change direction when required, rather than clinging on to what worked a couple of years ago. That brings us on to another important element of transformational leadership: the change line is not, and never will be, set in stone. When you think about it, that makes sense, right? Your ambition for tomorrow is based on your knowledge and ability today. As your abilities grow and develop, as new technologies come on stream and as customer demands change, it is only natural that your future ambitions will shift based on today's scenario. Here again, those leaders who seek to develop problem-solving flexibility within their organisations are the ones more likely to come out on top. And if you're not going to be flexible, well then watch out for the 74% of leaders who, research suggests, are looking to be disruptors in their own sectors.


5 Artificial Intelligence Business Lessons From The Masters


Both IBM's Dinesh Nirmal and O'Reilly's Ben Lorica said preparing data for mathematical models is the primary bottleneck for AI. Nirmal's keynote focused on operationalizing AI; he described how real-world machine learning reveals assumptions embedded in business processes and in the models themselves that cause expensive and time-consuming misunderstandings. Data hygiene has been a critical failure point that has thwarted analytics efforts since the dawn of time. However, it's an even bigger issue as companies look to incorporate lots of data from various internal and third-party databases. IBM talked about the need for preparing data but also for having a structure for AI model management. In a meeting, Ben Lorica noted there's a role within the AI/data science discipline called the data engineer, who assists in preparing data for the data scientists to use in the algorithm training process. Even in 2018, we're still trying to eliminate the garbage-in, garbage-out problem.


Feds Announce Arrests of 3 'FIN7' Cybercrime Gang Members

"FIN7 is one of the most sophisticated and aggressive malware schemes in recent times, consisting of dozens of talented hackers located overseas," the Justice Department says in a fact sheet. The scale of FIN7's operations has been significant. In the U.S. alone, FIN7 allegedly stole "more than 15 million customer card records from over 6,500 individual point-of-sale terminals at more than 3,600 separate business locations," the Justice Department says. Many businesses have sought to better secure their payment card systems and networks in light of large intrusions in recent years affecting T.J. Maxx, Target, Home Depot and many others. But their efforts have not been fully effective. Indeed, the U.S. continues to suffer a payment card breach epidemic centered not just on restaurants, but also retailers and hotels. The problem is compounded by the ease of procuring card-scraping malware, designed to infect POS systems, as well as backdoor exploitation tools - such as the Carbanak backdoor - from underground cybercrime forums.


Preventing the next digital black swan: The auditor, the CISO and the C-Suite

On the surface, digital black swans may seem unforeseeable, but if you dig a little deeper, you’ll generally discover that many of these incidents could have been prevented. For instance, in the Equifax breach, hackers exploited a vulnerability that was publicly disclosed two months prior to the attack. If Equifax had installed the patch in a timely manner, this breach would likely have been prevented. The key to preventing digital black swans is carefully putting critical controls in place. There are a number of controls that companies can use to reduce the odds of experiencing a major cyberattack. For example, Equifax suffered from faulty vulnerability management. The credit reporting company had ample time to install a routine security update that would have prevented the cyber incident. Poor security practices at Equifax were systemic. Shortly after the breach, it was revealed that one of the company’s online employee portals could be accessed using the default credentials of “admin” as both the username and password. This simple negligence put millions of Americans’ data at great risk.



Quote for the day:


"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson


Daily Tech Digest - August 01, 2018

What is WebAssembly? The next-generation web platform explained
WebAssembly, developed by the W3C, is in the words of its creators a “compilation target.” Developers don’t write WebAssembly directly; they write in the language of their choice, which is then compiled into WebAssembly bytecode. The bytecode is then run on the client—typically in a web browser—where it’s translated into native machine code and executed at high speed. WebAssembly code is meant to be faster to load, parse, and execute than JavaScript. When WebAssembly is used by a web browser, there is still the overhead of downloading the WASM module and setting it up, but all other things being equal WebAssembly runs faster. WebAssembly also provides a sandboxed execution model, based on the same security models that exist for JavaScript now. Right now, running WebAssembly in web browsers is the most common use case, but WebAssembly is intended to be more than a web-based solution. Eventually, as the WebAssembly spec shapes up and more features land in it, it may become useful in mobile apps, desktop apps, servers, and other execution environments.



Improving Testability of Java Microservices with Container Orchestration & Service Mesh


This article shows how container orchestration provides an abstraction over service instances and facilitates in replacing them with mock instances. On top of that, service meshes enable us to re-route traffic and inject faulty responses or delays to verify our services' resiliency. We will use a coffee shop example application that is deployed to a container orchestration and service mesh cluster. We have chosen Kubernetes and Istio as example environment technology. Let’s assume that we want to test the application’s behavior without considering other, external services. The application runs in the same way and is configured in the same way as in production, so that later on we can be sure that it will behave in exactly the same way. Our test cases will connect to the application by using its well-defined communication interfaces. External services, however, should not be part of the test scenario. In general, test cases should focus on a single object-under-test and mask out all the rest. Therefore, we substitute the external services with mock servers.
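With Istio, for example, re-routing a service's traffic to a mock server can be expressed declaratively in a VirtualService; a sketch follows, where the `barista` and `barista-mock` names are hypothetical stand-ins for an external dependency of the coffee-shop example:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: barista
spec:
  hosts:
    - barista              # requests addressed to the real external service...
  http:
    - route:
        - destination:
            host: barista-mock   # ...are delivered to the mock server instead
```

Because the re-route happens in the mesh rather than in application code, the object under test runs with exactly its production configuration while the external dependency is masked out.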


Three steps to improve data fidelity in enterprises


Data fidelity requires the contextual evaluation of data in terms of security. This means examining data objects within the context of the environment in which they were created. In order to gather this data, you must not only re-examine what you deem important, but do so within the context of the tasks you are attempting to support. The task-support piece is critical because it bounds the problem space in which you can work. If the problem space is not bounded, all solutions will remain brittle point solutions that continue to fail when new problems are introduced. The ways systems can fail seem endless, but the ways systems can perform correctly are limited. This characteristic is key in any analysis that requires accurate predictions. Coincidentally, this same characteristic is oftentimes overlooked when attempting to accurately predict outcomes in the cyber domain. Three disciplines can assist in creating the boundaries and gathering the contextual data required to ensure data fidelity: dependency modeling, resiliency and reliability.


Key steps to success with self-service analytics

Gartner predicts that by 2020 the number of data and analytics experts in business units will grow at three times the rate of those in IT units. With that in mind, isn't creating a culture that values data an absolute imperative? Creating a community of practice (COP) is not as simple as 'training' often sounds. Just as Agile methods can quickly turn 'tragile' or 'fragile' if the team isn't bought into the approach, self-service will fail if there isn't a data-driven culture that champions best practices. A COP uses training first to promote consumption for the business, and second to build SMEs who will champion best practices for future builds. All areas of the enterprise are involved in creating this community: technical SMEs, novice developers and business consumers all interact during technical and tool-agnostic sessions. To further growth and development across varying BI maturity, smaller break-out sessions are used to connect business units with similar use cases or audiences, so they can work together on their BI solutions. By creating a community of practice, you are fostering a culture that understands BI best practices and is encouraged to hone and develop new skills.


Which two companies will lead the enterprise Internet of Things?

The biggest opportunities, the survey said, were in platforms supporting manufacturing and service applications. These enterprise IoT platforms, according to data and analytics firm GlobalData, "have become important enablers across a wide swathe of enterprise and industrial operations" by helping businesses become more productive, streamline their operations, and gain incremental revenues by connecting their devices and products to IoT sensors that collect a wide variety of environmental, usage, and performance data. The platforms are designed to help businesses collect, filter, and analyze data in a variety of applications that can help organizations make data-driven business, technology, and operational decisions. But which eIoT platforms are best positioned to lead the "dynamic and highly competitive" eIoT market? To find out, U.K.-based GlobalData conducted a "comprehensive analysis … with profiles, rankings, and comparisons of 11 of the top global platforms," including Amazon, Cisco, GE, Google, HPE, Huawei, IBM, Microsoft, Oracle, PTC, and SAP.


AI can deliver 'faster better cheaper' cybersecurity

"We need to be able to make good cybersecurity services accessible to small and medium businesses, and consumers, and so we see a great opportunity in that regard," Ractliffe said. "Bluntly, we can see 'better faster cheaper' means of delivering cybersecurity through artificial intelligence and automation." Australia's defence scientists are also turning to AI techniques in the military's increasingly complex networked environment. "When we look at a system like a warship, it is now completely networked ... so that in itself creates a vulnerability," said Australia's Chief Defence Scientist Dr Alex Zelinsky at the Defence Science and Technology Group (DSTG). The internet is a "best effort" network. Malicious actors can slow down network traffic, or even divert it to where it can be monitored. This can happen in real time, and the challenge is how to detect that, and respond as quickly as possible. "I think that's where the AI elements come in," Zelinsky said. But one of the challenges of using AI in a protective system, or in the potential offensive systems that Zelinsky hinted that DSTG is working on, is explainability.


Digital trust: Security pros, business execs and consumers see it differently

“We are at a crossroads in the information age as more companies are being pulled into the spotlight for failing to protect the data they hold, so with this research, we sought to understand how consumers feel about putting data in organizations’ hands and how those organizations view their duty of care to protect that data,” said Jarad Carleton, industry principal, Cybersecurity at Frost & Sullivan. “What the survey found is that there is certainly a price to pay – whether you’re a consumer or you run a business that handles consumer data – when it comes to maintaining data privacy. Respect for consumer privacy must become an ethical pillar for any business that collects user data.” Responses to the survey showed that the Digital Trust Index for 2018 is 61 points out of 100, a score that indicates flagging faith among the consumers surveyed in the ability or desire of organizations to fully protect user data. The index was calculated from several metrics that measure key factors of digital trust, including how willing consumers are to share personal data with organizations and how well they think organizations protect that data.
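As a purely hypothetical sketch of how a composite index of this kind might be computed (the article does not publish the actual metrics or weights, so the names and numbers below are illustrative only):

```python
def trust_index(metrics: dict, weights: dict) -> float:
    """Weighted mean of 0-100 metric scores, normalized by total weight."""
    total_weight = sum(weights[name] for name in metrics)
    weighted = sum(metrics[name] * weights[name] for name in metrics)
    return round(weighted / total_weight, 1)

# Illustrative survey metrics, each already scaled to 0-100.
metrics = {
    "willingness_to_share": 55.0,   # how willing consumers are to share data
    "perceived_protection": 64.0,   # how well they think firms protect it
    "breach_experience": 62.0,      # illustrative third metric
}
weights = {
    "willingness_to_share": 0.4,
    "perceived_protection": 0.4,
    "breach_experience": 0.2,
}

index = trust_index(metrics, weights)
```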


Disruption: The True Cost of an Industrial Cyber Security Incident

The IoT threat facing industrial control systems is expected to get worse. In late 2016, Gartner estimated that there would be 8.4 billion connected things worldwide in 2017. The global research company said there could be approximately 20.5 billion web-enabled devices by 2020. An increase of this magnitude would give attackers plenty of new opportunities to leverage vulnerable IoT devices against industrial control systems. Concern over flawed IoT devices is justified. Attackers can misuse those assets to target industrial environments, disrupt critical infrastructure and jeopardize public safety. Those threats notwithstanding, many professionals don’t feel that the digital threats confronting industrial control systems are significant. Others are overconfident in their abilities to spot a threat. For instance, Tripwire found in its 2016 Breach Detection Study that 60 percent of energy professionals were unsure how long it would take automated tools to discover configuration changes in their organizations’ endpoints or for vulnerability scanning systems to generate an alert.


How to evolve architecture with a reactive programming model


At the top level, the reactive model demands that enterprise architects think in terms of steps rather than flows. Each step is a task that is performed by a worker, an application component or a pairing of the two. Steps are invoked by a message and generate one or more responses. For example, a customer number has to be validated, meaning it's associated with an active account. This step might be a part of a customer order, an inquiry, a shipment or a payment. Historically, enterprise architects might consider this sequence to be a part of each of the application flows cited above. In the reactive programming model, it's essential to break out and identify the steps. Only after that should architects compose them into higher-level processes. It's difficult to work with line organizations to define steps because they tend to think more in terms of workers and roles, which dictated the flow models of the past. If you're dealing with strict, top-down EA, you'd derive steps by looking at the functional components of the traditional tasks, such as answering customer inquiries. 
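The step-and-message idea above can be sketched in a few lines: each step is a worker invoked by a message that emits one or more response messages, and flows are composed from steps afterward. This is a minimal illustration; the step names, account store, and flat price are hypothetical, not from the article.

```python
from typing import Callable, Dict, List

# A step is a worker: it receives a message and returns response messages.
Step = Callable[[dict], List[dict]]

ACTIVE_ACCOUNTS = {"C-1001", "C-2002"}  # hypothetical account store

def validate_customer(message: dict) -> List[dict]:
    """Step: check that the customer number belongs to an active account."""
    ok = message.get("customer") in ACTIVE_ACCOUNTS
    return [{**message, "customer_valid": ok}]

def price_order(message: dict) -> List[dict]:
    """Step: attach a price; only meaningful for valid customers."""
    if not message.get("customer_valid"):
        return [{**message, "status": "rejected"}]
    return [{**message, "status": "priced", "total": 42.0}]  # hypothetical flat price

def run_process(steps: List[Step], message: dict) -> List[dict]:
    """Compose steps into a higher-level process by chaining messages."""
    messages = [message]
    for step in steps:
        messages = [out for m in messages for out in step(m)]
    return messages

# The same validation step can be reused in an order flow, an inquiry
# flow, or a payment flow -- composition happens after steps are defined.
order_flow = [validate_customer, price_order]
result = run_process(order_flow, {"customer": "C-1001", "item": "widget"})
```

The key design point matches the text: `validate_customer` exists independently of any one flow, and only `run_process` composes steps into a higher-level process.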


How Contract Tests Improve the Quality of Your Distributed Systems


In order to fail fast and start getting immediate feedback from our application, we do test-driven development and start with unit tests. That’s the best way to start sketching the architecture we’d like to achieve. We can test functionality in isolation and get immediate feedback from those fragments. With unit tests, it’s much easier and faster to figure out the reason for a particular bug or malfunction. Are unit tests enough? Not really, since nothing works in isolation. We need to integrate the unit-tested components and verify that they work properly together. A good example is asserting that a Spring context starts properly and all required beans are registered. Let’s come back to the main problem – integration tests of communication between a client and a server. Are we bound to use hand-written HTTP / messaging stubs and coordinate every change with their producers? Or are there better ways to solve this problem? Let’s take a look at contract tests and how they can help us.
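A framework-free sketch of the core idea: both sides share one contract; the consumer tests against a stub generated from it, and the producer verifies its real handler satisfies it, so hand-written stubs cannot silently drift. (The article's context is Spring; the contract shape, endpoint, and handler below are invented for illustration.)

```python
# Shared contract: one request/response pair both sides agree on.
CONTRACT = {
    "request": {"method": "GET", "path": "/customers/C-1001"},
    "response": {"status": 200, "body": {"id": "C-1001", "active": True}},
}

def stub_from_contract(contract):
    """Consumer side: a stub 'server' derived from the contract,
    replacing a hand-written stub that could go stale."""
    def stub(method, path):
        req = contract["request"]
        if (method, path) == (req["method"], req["path"]):
            return contract["response"]
        return {"status": 404, "body": {}}
    return stub

def producer_handler(method, path):
    """Producer side: the 'real' implementation under verification."""
    if method == "GET" and path.startswith("/customers/"):
        customer_id = path.rsplit("/", 1)[1]
        return {"status": 200, "body": {"id": customer_id, "active": True}}
    return {"status": 404, "body": {}}

def verify_producer(contract, handler):
    """Producer-side contract verification: replay the contract's request
    against the real handler and compare with the promised response."""
    req, expected = contract["request"], contract["response"]
    return handler(req["method"], req["path"]) == expected

stub = stub_from_contract(CONTRACT)
consumer_sees = stub("GET", "/customers/C-1001")   # consumer tests use this
producer_ok = verify_producer(CONTRACT, producer_handler)  # producer CI runs this
```

If the producer changes its response shape, `verify_producer` fails in the producer's build rather than surprising the consumer in production; tools like Spring Cloud Contract or Pact automate this same loop over real HTTP and messaging.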



Quote for the day:


"If you don’t like the road you’re walking, start paving another one." -- Dolly Parton