Daily Tech Digest - June 27, 2018

The Future of Decentralization


Just as alternative, renewable, natural and sustainable energy can contribute to a cleaner environment, so too must the infrastructure of politics, economics and the motives of the web be remade with more intention if technology is truly to serve humanity and not the other way around. This will take time, likely decades if not longer. It will take the help of whatever artificial intelligence becomes. ... Decentralization requires a human-AI hybrid civilization. It requires automation, robots, new kinds of jobs, roles and values not yet invented or even implemented or imagined. Radical decentralization also requires leadership: humans with a different consciousness, more inclusive, more augmented with data, and wiser than any human leaders in history. The future of decentralization is global; it's not something Scandinavia or China arrives at before others, though they may certainly embody some elements of it first. A decentralized internet, like solutions on the blockchain, will require upgrades, better iterations, improved software, smarter smart contracts, quantum infrastructures, and AIs that police them to make sure they are what they were intended to be.


IT chiefs keep obsolete systems running just to keep data accessible

One of the chief problems with keeping aging systems running is security, as the research highlights. 87 per cent of the IT decision makers in the survey sample agree that legacy applications on older operating systems are more vulnerable to security threats. At the same time, 82 per cent recognize that old or legacy systems are rarely compatible with modern security and authentication methods. “On older systems some security vulnerabilities are harder – or even impossible – to resolve. If available at all, patches for new threats could be delayed because legacy apps are considered less of a priority,” says Jim Allum. “As legacy applications pre-date the latest security innovations there is a clear security risk to having a lot of legacy within your application portfolio.” A related issue is compliance, with 84 per cent of the sample agreeing that on old or legacy applications it is harder to accurately track and control access to sensitive data in line with stricter data privacy regulations such as the GDPR.



Global IoT security standard remains elusive


Despite the cacophony of approaches towards IoT security, Kolkman noted that most are underpinned by common IT security principles. “If you look at the different IoT security frameworks, there seems to be consensus on things like upgradability and data stewardship – even if there’s no global standard that describes it all,” he said. These principles are reflected in a set of enterprise IoT security recommendations released by the Internet Society this week. Among them is the need for companies to closely follow the lifecycle of IoT devices, which should be decommissioned once they are no longer updatable or secure. Meanwhile, the Internet Society’s Internet Engineering Task Force is also working on IoT standards in areas including authentication and authorisation, cryptography for IoT use cases and device lifecycle management. With cyber security at the top of most national security agendas today, Kolkman said the Internet Society has reached out to policy makers to provide recommendations about what they can do, such as setting minimum standards of IoT security and accountability.


A CIO on Carrying the Burden of Medical Device Cybersecurity

The situation "has created significant challenges, because ... those devices sit in our networks and infrastructures from the technology side, and we're now held responsible to remediate those issues," says Earle, who is also chair of the College of Healthcare Information Management Executives - or CHIME - board of trustees. "Many of those devices are very proprietary and it's very difficult to manage them because you would need to put in some kind of solution that ... monitors devices - and the proprietary nature of those devices makes that very challenging to do," he says in an interview with Information Security Media Group. "It's a lack of standards as well as a lack of characterization of those standards that makes this challenging. There's no true vulnerability disclosure associated with these devices. Suppliers should provide documentation of the vulnerabilities of their products like they would normally do for anything else in a situation like that. We need to ask for greater risk sharing."


Know your enemy: Understanding insider attacks


When an enterprise establishes an insider threat program, executives need to be aware of the potential negative effects this can have on employee morale and sensitivity to loss of privacy. Implementing an insider threat program mandates increased communication with the staff to explain the program, explain how they can help and offer frequent emphasis on program wins. The 2016 Ponemon Institute report "Tone at the Top and Third Party Risk," noted that "If management is committed to a culture and environment that embraces honesty, integrity and ethics, employees are more likely to uphold those same values. As a result, such risks as insider negligence and third party risk are minimized." An insider threat program should also include a steering board/committee. Ideally, such a committee should include representatives from law, intellectual property, the office of internal governance, global privacy, human resources, information technology, corporate communications and security.


The future of consumer MDM: Cloud, referential matching and automation

Current MDM technologies typically use “probabilistic” and “deterministic” matching algorithms to match and link consumer records across an enterprise and to ensure there is only one master record for each consumer. These algorithms match records by comparing the demographic data contained in those records—data such as names, addresses, and birthdates. But demographic data is notoriously error-prone, frequently incomplete and constantly falling out of date. And probabilistic and deterministic matching algorithms are only as accurate as the underlying demographic data they are comparing, meaning they are fundamentally limited in how accurate they can be by the availability and quality of that data. But there is a new paradigm in identity matching technology called “referential matching” that is not subject to these same fundamental limits. Rather than directly comparing the demographic data of two consumer records to see if they match, referential matching technologies instead compare the demographic data from those records to a comprehensive and continuously updated reference database of identities.
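To make the contrast concrete, here is a minimal Python sketch of the two approaches; the reference database, field names and matching rules are invented for illustration, not taken from any vendor's product.

```python
# Minimal sketch contrasting deterministic and referential matching.
# The reference database and field names here are hypothetical.

def deterministic_match(rec_a, rec_b, fields=("name", "address", "birthdate")):
    """Exact comparison of demographic fields: only as good as the data."""
    return all(rec_a.get(f) == rec_b.get(f) for f in fields)

# A referential matcher instead resolves each record against a curated,
# continuously updated identity database and compares the resolved IDs.
REFERENCE_DB = [
    {"identity_id": "ID-001",
     "known_names": {"Jane Smith", "Jane Smyth"},
     "known_addresses": {"12 Oak St", "12 Oak Street"}},
]

def resolve_identity(record):
    """Map a record to a reference identity, tolerating stale or variant data."""
    for identity in REFERENCE_DB:
        if (record.get("name") in identity["known_names"]
                and record.get("address") in identity["known_addresses"]):
            return identity["identity_id"]
    return None

def referential_match(rec_a, rec_b):
    ida, idb = resolve_identity(rec_a), resolve_identity(rec_b)
    return ida is not None and ida == idb

old = {"name": "Jane Smith", "address": "12 Oak St"}
new = {"name": "Jane Smyth", "address": "12 Oak Street"}
print(deterministic_match(old, new))  # False: the demographics drifted
print(referential_match(old, new))    # True: both resolve to ID-001
```

The point of the sketch is that the referential matcher's accuracy is bounded by the quality of the reference database, not by the quality of any one record being compared.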


The ML.Net project version 0.2 is available for .Net Core 2.0 and .Net Standard 2.0 with support for the x64 architecture only (Any CPU will not compile right now). It should, thus, be applicable in any framework where .Net Standard 2.0 (e.g. .Net Framework 4.6.1) is applicable. The project is currently under review, and APIs may change in the future. Learning the basics of machine learning has not been easy if you want to use an object-oriented language like C# or VB.Net, because most of the time you have to learn Python before anything else, and then you have to find tutorials with sample data that can teach you more. Even looking at object-oriented projects like Accord.Net, Tensor.Flow, or CNTK is not easy, because each of them comes with its own API, its own way of implementing the same things, and so on. I was thrilled by the presentations at Build 2018 because they indicated that we can use a generic work-flow approach that allows us to evaluate the subject with local data, local .Net programs, local models, and results, without having to use a service or another programming language like Python.


Could blockchain be the missing link in electronic voting?

To bolster the security, accuracy and efficiency of elections, some suggest the implementation of blockchain technology. Blockchain is a decentralised, distributed, electronic ledger used to record transactions in such a way that transactions made using it can't be subsequently altered without the agreement of all parties. Thousands of network nodes are needed to reach consensus on the order of ledger entries. Most famously, blockchain is used for bitcoin transactions, but it's finding use cases in everything from storing medical records to authenticating physical transactions. Such is the level of interest in blockchain technology that governments are even examining its potential use cases. Blockchain-enabled elections have already taken place: In March, Sierra Leone voted in its presidential elections and votes in the West Districts were registered on a blockchain ledger by Swiss-based firm Agora. By storing the data in this way, election data was "third-party verifiable and protected against any possibility of tampering," the company said, with the results publicly available to view.
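The tamper-evidence described here comes from hash-chaining the ledger entries. A toy Python sketch of that idea (not a real voting or consensus system) shows why retroactive alteration is detectable:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Append-only toy ledger: each entry commits to everything before it.
ledger = []
prev = "0" * 64
for vote in ["candidate-A", "candidate-B", "candidate-A"]:
    block = {"vote": vote, "prev_hash": prev}
    prev = block_hash(block)
    ledger.append((block, prev))

def verify(ledger):
    """Recompute the chain; any altered entry breaks every later hash."""
    prev = "0" * 64
    for block, recorded in ledger:
        if block["prev_hash"] != prev or block_hash(block) != recorded:
            return False
        prev = recorded
    return True

print(verify(ledger))                  # True
ledger[0][0]["vote"] = "candidate-B"   # tamper with an early entry
print(verify(ledger))                  # False: tampering detected
```

In a real deployment it is the consensus of thousands of nodes, not a single verifier script, that decides which chain is authoritative.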


Secure by Default Is Not What You Think

Once a product is built to be secure by default, it still needs to remain that way once deployed in its environment, which is increasingly complex and interconnected. That’s why the first responder — the person installing the product, application, or database — is ever more important. To keep the organization and users safe, the first responder needs to apply general principles, such as configuring controls to be as secure as possible, enabling encryption at rest and SSL/TLS secure communication channels, restricting access to applications or data only to those people who need it, and requiring authentication that relies on trusted identity sources. Certificate- or key-based authentication is also a consideration. General principles can guide administrators, yet one size does not fit all. Administrators also have to tailor approaches to specific environments. What banks need from their databases, applications, and other technologies, for instance, is different from what oil companies or intelligence agencies need. Whatever the industry, someone needs to watch the whole picture. For instance, a database sits between an application above it and an operating system below it.
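As a small illustration of configuring a channel to be "as secure as possible", here is a Python standard-library sketch that enforces certificate validation and a modern TLS floor when opening a connection. The host is a placeholder, and note that create_default_context() already enables verification; the sketch simply makes the settings explicit:

```python
import socket
import ssl

# TLS context that enforces certificate and hostname verification --
# the secure default, stated explicitly rather than assumed.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols (Python 3.7+)
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```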


Underground vendors can reliably obtain code signing certificates from CAs

The researchers were also surprised to find that all vendors opt for selling the anonymous code signing certificates to the malware developers instead of providing a signing service for a fee. “All vendors claim that their certificates are freshly issued, that is, that they have not been stolen in any way but obtained directly from a CA. Further, all vendors claimed that they sell one certificate into the hands of just one client, and some even offered free or cheap one-time reissue if the certificate was blacklisted too soon. Vendors did not appear to be concerned with revocation, often stating that it usually ‘takes ages’ until a CA revokes an abused certificate,” they shared. “Some vendors even claim to obtain the certificate on demand, having the certificate issued once a customer pays half of the price. Interestingly, [one vendor] even claims that he always has a few publisher identities prepared and the customer can then choose which of these publisher names he wants to have his certificate issued on.”



Quote for the day:


"The highest reward for a man's toil is not what he gets for it but what he becomes by it." -- John Rushkin


Daily Tech Digest - June 26, 2018

Is It Time To Say Goodbye To Email?


Nearly all organizations that try to get rid of email start internally by switching to a cloud-based collaborative system that allows employees to chat, correspond and work together virtually. Some companies have even resorted to an automatic response when an internal email is sent, reminding the sender that the email won’t be responded to and that they need to use the collaboration software instead. ... The happy medium between getting rid of email completely and keeping it as it is would be to modify it in some way. If the future of work relies on new technology and collaboration, it makes sense to imagine the next generation of email serving a similar purpose to pagers in the 1990s. If someone posts on the office’s collaborative system, sends a calendar invite, or tags you in a post, you could get an alert in your email that directs you to the correct system for the information. In this forward-thinking view of email, the purpose is to notify and direct instead of to provide all the information. This system would seem to work better internally, but could also have success across organizations.



North American, UK, Asian regulators press EU on data privacy exemption

It also narrows an exemption for cross-border personal data transfers made in the “public interest” by imposing new conditions, including extra privacy safeguards, on its use, said the officials and legal experts. Under the previous law, regulators used the exemption to share vital information, such as bank and trading account data, to advance probes into a range of misconduct. For now, regulators are operating on the basis they can continue sharing such data under the new exemption but say doing so takes them into legally ambiguous territory because the new law’s language leaves room for interpretation. They fear that without explicit guidance, investigations such as current U.S. probes into cryptocurrency fraud and market manipulation in which many actors are based overseas, could be at risk. This is because in the absence of an exemption, cross-border information sharing could be challenged on the grounds that some countries’ privacy safeguards fall short of those now offered by the EU.


5 reasons the IoT needs its own networks

Despite having to be built from scratch, these new IoT networks can offer much less expensive service. T-Mobile, for example, offers a $6-a-year rate plan for machines on its new NB-IoT network. The company claims that’s 1/10 the cost of a similar Verizon service plan, but even $60 a year is far less expensive than a standard cellular connection. Just as important, the low-power devices that use these networks are much less expensive than standard LTE devices like mobile phones. As AT&T put it in a press release last year, "We can now reach new places and connect new things at a price that's more affordable than ever before.” ... Efficient use of scarce, expensive radio-frequency spectrum is the third reason dedicated IoT networks make sense. Both NB-IoT and LTE-M can be deployed in a very small slice of spectrum compared to 4G deployments. NB-IoT can even be deployed in so-called LTE spectrum "guard bands" that sit between LTE channels to prevent interference. That means NB-IoT communications do not share spectrum resources with standard smartphone traffic, for example.


Tales from the Crypt(ography) Lab with Dr. Kristin Lauter

So that might sound like it’s in the stone age when we think of how fast technology evolves these days. But typically, for public key crypto systems over the last 40, 50 years, there has been roughly at least a 10-year time lag before crypto technologies get adopted. And that’s because the community needs to have time to think about how hard these problems are, and to set the security levels appropriately, and to standardize the technologies. So, we’re just getting to that point now where, kind of, almost 10 years after the first solutions were introduced, we’ve put together a broad coalition of researchers in industry, government and academia, to come together to try to standardize this technology. And we’re having our second workshop in March at MIT, where we’re going to try to get broad approval for our standard document, which recommends security parameters. So that’s the first challenge: getting everyone to agree on what is the strength of these systems, kind of, essentially, how hard are these mathematical problems underneath. And then we plan to continue to build on that with this community, to get agreement on a common set of APIs.


Ethical Data Science Is Good Data Science


When you work with third parties, where your data is “better together,” should you share it all? No. This means enforcing fine-grained controls on your data. Not just coarse-grained role-based access control (RBAC), but controls down to the column and row level of your data, based on user attributes and purpose (more on that below). You need to employ techniques such as column masking, row redaction, limiting access to an appropriate percentage of the data, and, even better, differential privacy to ensure data anonymization. In almost all cases, your data scientists will thank you for it. It provides accelerated, compliant access to data, and with that comes a great deal of comfort, freedom, and collaboration, because everyone knows they are compliant in what they are doing and can share work more freely. This freedom to access and share data comes when data controls are enforced at the data layer, consistently and dynamically, across all users. It provides the strong foundation needed to enable a high-performing data science team.
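A minimal Python sketch of three of the techniques named above (column masking, row redaction, and a differentially private count); the rows, attribute names and epsilon value are invented for illustration:

```python
import random

# Hypothetical rows; column names and attributes are invented.
rows = [
    {"name": "Jane Smith", "zip": "94107", "visits": 12},
    {"name": "Joe Bloggs", "zip": "10001", "visits": 3},
]

def mask_columns(row, masked=("name",)):
    """Column masking: hide identifying fields from unprivileged users."""
    return {k: ("***" if k in masked else v) for k, v in row.items()}

def redact_rows(rows, allowed_zips):
    """Row redaction: only return rows the caller's attributes permit."""
    return [r for r in rows if r["zip"] in allowed_zips]

def dp_count(rows, epsilon=1.0):
    """Differentially private count: add Laplace(1/epsilon) noise.

    The difference of two Exp(epsilon) draws is Laplace-distributed
    with scale 1/epsilon, the calibration for a count query.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return len(rows) + noise

visible = [mask_columns(r) for r in redact_rows(rows, {"94107"})]
print(visible)         # [{'name': '***', 'zip': '94107', 'visits': 12}]
print(dp_count(rows))  # ~2, with calibrated noise
```

In a real platform these policies would be evaluated dynamically at the data layer per user and purpose, rather than hard-coded as they are in this sketch.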


Function Platforms with Chad Arimura and Matt Stephenson

“Serverless” is a word used to describe functions that get deployed and run without the developer having to manage the infrastructure explicitly. Instead of creating a server, installing the dependencies, and executing your code, the developer just provides the code to the serverless API, and the serverless system takes care of the server creation, the installation, and the execution. Serverless was first offered with the AWS Lambda service and has since been adopted by other cloud providers. There have also been numerous open source serverless systems. On SE Daily, we have done episodes about OpenWhisk, Fission, and Kubeless. All of these are built on the Kubernetes container management system. Kubernetes is an open-source tool used to build and manage infrastructure, so it is a useful building block for higher-level systems.
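In AWS Lambda terms, "the code the developer provides" is just a handler function with a fixed signature; a minimal Python sketch (the event shape and greeting logic are illustrative):

```python
import json

# Minimal AWS Lambda handler: the platform provisions and scales the
# servers; the developer supplies only this entry point.
def lambda_handler(event, context):
    # 'event' carries the trigger payload (HTTP request, queue message, ...).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything else (packaging the runtime, provisioning and scaling servers, wiring up the trigger) is the platform's job, which is the whole point of the model.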


Serverless cloud computing is the next big thing

Serverless computing in the cloud is a good idea—serverless computing is not just for the datacenter. Serverless cloud computing means the ability to get out of the business of provisioning cloud-based resources, such as storage and compute, to support your workloads, and instead use automation at the cloud provider to allocate and deallocate resources automatically. ... We’re witnessing a reengineering of public cloud services to use a serverless approach. First, we’re seeing it in resource-intensive services such as compute, storage, and databases, but you can count on the higher-end cloud services being added to the list over time, including machine learning and analytics. What this all means for the enterprise is that less work will be needed to figure out how to size workloads. This serverless trend should also provide better utilization and efficiency, which should lower costs over time. Still, be careful: I’ve seen the use of serverless computing lead to higher costs in some instances. So be sure to monitor closely. There is clearly a need for serverless cloud computing.


Latin American banks advance in digital transformation projects

In terms of consumer technology trends, mobile banking in the region has surpassed both online banking and traditional channels and has become the number one channel for banks today, the report says. Regional Internet banking client uptake is at 67 percent, compared to 79 percent in 2015, while mobile applications rose to 33 percent. Millennials appear to be the most important target segment for digital banking, followed by premium clients and "native", digital customers, according to the study. When it comes to enterprise technology trends, the report notes that over 60 percent of Latin American banks are implementing or testing cloud computing, chatbots and Big Data, while a minority (less than 22 percent) mentions Blockchain, Internet of Things and virtual reality. Some 13 percent of the banks surveyed mentioned they have plans to invest in a new core banking platform in the next year while 7 percent are updating their core system. While 70 percent of those polled consider other banks with better digital capabilities as the main threat, nine out of 10 banks surveyed consider fintechs as potential partners or acquisitions.


Why Intel won't patch TLBleed vulnerability, despite serious concerns for cloud users

“Maybe Intel has solutions with less overhead. But Intel excluded us from the conversation, so we don't know what those solutions might be. So we follow a pattern of immediately releasing a rough solution, which we can retract if a cheaper solution becomes published." Intel's position on this is somewhat peculiar, as the company has indicated that existing mitigations are sufficient to prevent this issue, and has declined to request a CVE to identify the flaw, as is standard. The Register report also indicates that Intel has declined to pay a bug bounty for this discovery via HackerOne, even though side-channel attacks fall within the scope of the requirements Intel lists for its bounty program, a decision Gras described to The Register as "goalpost-moving." Exploitation of, and patches for, TLBleed are likely to be more technically involved than the OpenBSD strategy of disabling SMT entirely, as ensuring that schedulers do not place processes of different security levels in the same core is a significant undertaking.


The C4 Model for Software Architecture


Ambiguous software architecture diagrams lead to misunderstanding, which can slow a good team down. In our industry, we really should be striving to create better software architecture diagrams. After years of building software myself and working with teams around the world, I've created something I call the "C4 model". C4 stands for context, containers, components, and code — a set of hierarchical diagrams that you can use to describe your software architecture at different zoom levels, each useful for different audiences. Think of it as Google Maps for your code. ... Level 2, a container diagram, zooms into the software system, and shows the containers (applications, data stores, microservices, etc.) that make up that software system. Technology decisions are also a key part of this diagram. Below is a sample container diagram for the Internet banking system. It shows that the Internet banking system (the dashed box) is made up of five containers: a server-side web application, a client-side single-page application, a mobile app, a server-side API application, and a database.



Quote for the day:


"When you practice leadership,The evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani


Daily Tech Digest - June 25, 2018

What Is A Zero-Day Exploit? A Powerful But Fragile Weapon

A zero-day is a security flaw that has not yet been patched by the vendor and can be exploited and turned into a powerful but fragile weapon. Governments discover, purchase, and use zero-days for military, intelligence and law enforcement purposes — a controversial practice, as it leaves society defenseless against other attackers who discover the same vulnerability. Zero-days command high prices on the black market, but bug bounties aim to encourage discovery and reporting of security flaws to the vendor. The patching crisis means zero-days are becoming less important, and so-called 0ld-days become almost as effective. A zero-day gets its name from the number of days that a patch has existed for the flaw: zero. Once the vendor announces a security patch, the bug is no longer a zero-day (or "oh-day" as the cool kids like to say). After that the security flaw joins the ranks of endless legions of patchable but unpatched old days. In the past, say ten years ago, a single zero-day might have been enough for remote pwnage. This made discovery and possession of any given zero-day extremely powerful.



Address network scalability requirements when selecting SD-WAN


Calculating scalability based on the number of sites can be trickier. Not only do scalability requirements include provisioning sufficient bandwidth for all your sites, but network architecture matters when considering the scale needed to support a large number of branches. Some SD-WAN offerings are designed to spin up a virtual pipe from every site to every other site and maintain it perpetually. That option puts a large burden of VPN management on the service, and that burden grows with the square of the number of sites. Other SD-WAN services may also depend on VPNs, but without the need to have each VPN on constantly. For example, the service might allow customers to precalculate some of the necessary operating parameters for the VPNs and instantiate them only when needed for a network session. This option can support far more nodes than the previous design. Still other SD-WAN products take a different approach entirely, without big VPN meshes. These employ architectures where the work of supporting the N+1st site is the same as the work of supporting the second site. This design could support even more nodes.
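The scaling difference is easy to quantify; a quick Python sketch of tunnel counts for a full mesh versus a one-tunnel-per-branch design (pure arithmetic, no vendor specifics):

```python
# Tunnel counts for a full mesh versus a hub-and-spoke/on-demand design.
def full_mesh_tunnels(sites: int) -> int:
    # every site keeps a standing tunnel to every other site
    return sites * (sites - 1) // 2

def hub_spoke_tunnels(sites: int) -> int:
    # each branch maintains one standing tunnel to the hub
    return sites - 1

for n in (10, 100, 1000):
    print(n, full_mesh_tunnels(n), hub_spoke_tunnels(n))
# 10        45      9
# 100     4950     99
# 1000  499500    999
```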


Ex-Treasury Official Touts the Promise of Fintech ‘Sandboxes’

As it stands now, “there’s nothing that calls itself a sandbox” in the U.S., Crane said. But comments by a Treasury official on Thursday at a SIFMA event, about an upcoming Treasury report on such regulatory tools, signal promising movement. What exactly is a “regulatory sandbox”? As the RegTechLab report explains, it’s a new tool allowing companies “to test new products, services, delivery channels or business models in a live environment, subject to appropriate conditions and safeguards.” Regulators, the report continues, “have also taken other proactive steps to engage with industry directly, and in some cases pursued mechanisms less formal than sandboxes to facilitate testing or piloting of new innovations.” Craig Phillips, counselor to the Treasury secretary, weighed in on Thursday that the financial services landscape “has over 3,300 new fintech companies” and “over 20% of all personal loans” originate in the fintech marketplace. “We need a new approach by regulators that permits experimentation for services and processes,” said Phillips, adding that it could include regulatory sandboxes, aka innovation facilitators.


Adapting to the rise of the holistic application


A shift in mindset is needed. McFadin says it is much harder to call a project “done”, as each element can be changed or updated at any time. While the services can be more flexible, it is necessary to think differently about the role of software developers. Companies that have implemented agile development properly should be equipped to manage this change more effectively. However, those that just namecheck agile, or don’t engage in the process fully, may well struggle. Eric Mizell, vice-president of solution engineering at software analytics company OverOps, claims the new composable, containerised, compartmentalised world of software is creating headaches for those tasked with maintaining the reliability of these complex applications. “Even within the context of monolithic applications, our dependence on 30-year-old technology, such as logging frameworks to identify functional issues in production code, is sub-standard at best – within the context of microservices and holistic applications, it is nearly worthless,” says Mizell.


Blockchain Watchers Say Decentralized Apps Are Around The Corner

More than a decade ago, Apple had to deal with that perennial chicken-and-egg problem: finding killer apps that made people want to buy an iPhone. Developers building apps on blockchain technology face the same dilemma. Not enough people are using browsers and tokens that run on a blockchain network, so it’s hard to amass the number of users needed to propel a new app to success. But that hasn’t stopped people from trying or researchers from divining that decentralized apps, or “dapps,” really are just around the corner. One recent report from Juniper Research, a market intelligence firm in the U.K., states that in the coming year we’ll see a “significant expansion” in the deployment of dapps built on blockchain technology. Regular iPhone and Android users should be able to download a dapp on their smartphone “by the end of the year,” Juniper's head of forecasting, Windsor Holden, told Forbes, adding that the dapps most likely to first gain mass adoption would deal with verifying identity or tracking the provenance of products or food in the supply chain.


IoT could be the killer app for blockchain

The rise of edge computing is critical in scaling up tech deployments, owing to reduced bandwidth requirements, faster application response times and improvements in data security, according to Juniper research. Blockchain experts from IEEE believe that when blockchain and IoT are combined, they can actually transform vertical industries. While financial services and insurance companies are currently at the forefront of blockchain development and deployment, the transportation, government and utilities sectors are now engaging more due to the heavy focus on process efficiency, supply chain and logistics opportunities, said David Furlonger, a Gartner vice president and research fellow. For example, pharmaceuticals are required by law to be shipped and stored in temperature-controlled conditions, and data about that process is required for regulatory compliance. The process for tracking drug shipments, however, is highly fragmented. Many pharmaceutical companies pay supply chain aggregators to collect the data along the journey to meet the regulatory standards.


Serverless Native Java Functions using GraalVM and Fn Project

The Fn Project is an open-source container-native serverless platform that you can run anywhere — any cloud or on-premise. It’s easy to use, supports every programming language, and is extensible and performant. It is an evolution of the IronFunctions project from iron.io and is mainly maintained by Oracle, so you can expect an enterprise-grade solution, with first-class support for building and testing. It basically leverages container technology to run, and you can get started very quickly; the only prerequisite is having Docker installed. ... Java is often blamed for being heavy and not suitable for running as a serverless function. That reputation does not come from nothing: Java sometimes needs a full JRE to run, with slow startup times and high memory consumption compared to native executables like those Go produces. Fortunately this isn't true anymore; with new versions of Java you can create modular applications, compile ahead of time, and use new and improved garbage collectors with both the OpenJDK and OpenJ9 implementations. GraalVM is a new flavor that delivers a JVM supporting multiple languages and compilation into a native executable or shared library.


Data Science for Startups: Deep Learning


Deep learning provides an elegant solution to handling these types of problems, where instead of writing a custom likelihood function and optimizer, you can explore different built-in and custom loss functions that can be used with the different optimizers provided. This post will show how to write custom loss functions in Python when using Keras, and show how using different approaches can be beneficial for different types of data sets. I’ll first present a classification example using Keras, and then show how to use custom loss functions for regression. The image below is a preview of what I’ll cover in this post. It shows the training history of four different Keras models trained on the Boston housing prices data set. Each of the models use different loss functions, but are evaluated on the same performance metric, mean absolute error. For the original data set, the custom loss functions do not improve the performance of the model, but on a modified data set, the results are more promising.
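In Keras, a custom loss is just a function of y_true and y_pred built from backend ops that you pass to compile(). A minimal sketch along the lines the post describes; the architecture and the choice of a log-space squared error are illustrative, not the post's exact models:

```python
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# Custom loss: squared error in log space, which damps the influence
# of very large targets (e.g. the most expensive houses).
def mean_squared_log_error(y_true, y_pred):
    y_pred = K.clip(y_pred, 0.0, 1e6)  # guard against log of negative predictions
    return K.mean(K.square(K.log(y_true + 1.0) - K.log(y_pred + 1.0)))

model = Sequential([
    Dense(32, activation="relu", input_shape=(13,)),  # 13 Boston housing features
    Dense(1),
])
# Different models can swap in different loss functions here while all
# being evaluated on the same metric, mean absolute error.
model.compile(optimizer="adam",
              loss=mean_squared_log_error,
              metrics=["mae"])
```

Training with model.fit on the Boston data then proceeds exactly as it would with a built-in loss.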


REST API Error Handling — Problem Details Response

RFC 7807 defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response, avoiding the need to define new error response formats for HTTP APIs. By providing more specific machine-readable messages with an error response, API clients can react to errors more effectively, and ultimately this makes the API services much more reliable, both from the REST API testing perspective and for the clients. In general, the goal of error responses is to create a source of information that not only informs the user of a problem, but of the solution to that problem as well. Simply stating a problem does nothing to fix it — and the same is true of API failures. RFC 7807 provides a standard format for returning problem details from HTTP APIs. ... The advantages of using this include a unification of the interfaces, making APIs easier to build, test and maintain. Also, I think that more advantages will come in the future as more and more API providers adjust to this standard.
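An RFC 7807 body is a JSON object with members such as type, title, status, detail and instance, served with the application/problem+json media type. A minimal Flask sketch returning one (the URIs and error text are placeholders):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(404)
def not_found(err):
    # RFC 7807 problem details body; the 'type' and 'instance' URIs
    # are illustrative placeholders for this sketch.
    body = {
        "type": "https://example.com/probs/resource-not-found",
        "title": "Resource not found",
        "status": 404,
        "detail": "No order exists with id 42.",
        "instance": "/orders/42",
    }
    resp = jsonify(body)
    resp.status_code = 404
    resp.mimetype = "application/problem+json"  # media type from the RFC
    return resp
```

A client seeing this response can branch on the stable "type" URI rather than parsing free-form error strings.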


Protecting IoT components from being physically compromised


Disruption of these industrial devices can cause catastrophic events on an international scale, hence the importance of implementing security solutions against a variety of attack vectors. The sole purpose is to prevent the intrusion of unauthorized (external or internal) actors and avoid disruption of critical control processes. This is not a theory but rather a disturbing fact. In 2017, a group of researchers from Georgia Tech developed a worm named "LogicLocker" that caused several PLC models to transmit incorrect data to the systems they control, with harmful consequences as a result. The common security methods of industrial networks are based mainly on the integration of dedicated network devices which are connected to the traffic artery at central junctions (usually next to network switches). This security method sniffs the data flow between the PLCs themselves, between the PLCs and the cloud (public or private) and between the user interface (HMI) and the cloud.



Quote for the day:


"Always and never are two words you should always remember never to use." -- Wendell Johnson


Daily Tech Digest - June 24, 2018

Walking With AI: How to Spot, Store and Clean the Data You Need

Machine learning initiatives are as diverse as companies themselves. Think critically about what sort of examples you need to train your algorithm on in order for it to make predictions or recommendations. For example, an online baby registry we partnered with wanted to project the lifetime value of customers within days of signup. Fortunately for us, it had proactively logged transaction data, including items customers added to their registries, where they were added and when they purchased. Furthermore, the client had logged the entire event stream, rather than just the current state of each registry, to maintain a database record. The client also brought us web and mobile event stream data. Through Heap Analytics, it had logged the type of device and browser used by each registrant into its transactional database. Using UTM codes, the registry company had even gathered attribution data, something collected for all or most marketing activities by just 51 percent of North American respondents to a 2017 AdRoll survey.
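Logging the full event stream, rather than only the current state, is what preserves this kind of training signal. A toy Python sketch of the difference, with invented registry fields and events:

```python
# Current-state logging: each update overwrites history.
registry_state = {"registry_id": 7, "items": ["crib", "stroller"]}

# Event-stream logging: every change is appended, so the full path to
# the current state -- what was added, from where, and when -- survives.
events = [
    {"registry_id": 7, "event": "item_added", "item": "crib",
     "source": "web", "ts": "2018-05-01T10:00:00Z"},
    {"registry_id": 7, "event": "item_added", "item": "bottle",
     "source": "mobile", "ts": "2018-05-02T09:30:00Z"},
    {"registry_id": 7, "event": "item_removed", "item": "bottle",
     "source": "web", "ts": "2018-05-03T14:00:00Z"},
    {"registry_id": 7, "event": "item_added", "item": "stroller",
     "source": "mobile", "ts": "2018-05-04T08:15:00Z"},
]

def replay(events):
    """Rebuild the current state from the event stream."""
    items = []
    for e in events:
        if e["event"] == "item_added":
            items.append(e["item"])
        elif e["event"] == "item_removed":
            items.remove(e["item"])
    return items

# The state table only knows the end result; the event stream also tells
# a model that a bottle was added and then removed, and on which channel.
assert replay(events) == registry_state["items"]
```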



The SOA Journey: From Understanding Business to Agile Architecture


If the monolith has ceased to fulfill its responsibilities in a way that satisfies the business, or if the development pace has slowed down, then something definitely needs to be done to fix this. But before that, you need to find the reason why that is so. In my experience, the reason is always the same: tight coupling and low cohesion. If your system belongs to a single bounded context, if it’s not big enough (yeah, sounds ambiguous, I’ll elaborate on this later), then all you have to do to fix things up is to decompose your system into modules the right way. Otherwise, you need to introduce a far more autonomous and isolated concept, which I call a service. This term is probably one of the most overloaded ones in the whole software industry, so let me clarify what I mean. I’ll give a stricter definition further on, but for now, I want to point out that, first of all, a service has logical boundaries, not physical ones. It can contain any number of physical servers, which can hold both backend code and UI data. There can be any number of databases inside those services, and they can all have different schemas.


The Convergence of Digitalization and Sustainability

The promise of digitalization — big data, artificial intelligence, the internet of things, cybersecurity, and more — is often described with hyperbole. Pundits and academics alike have described “big data” as the “new oil,” “the new soil,” and the primary driver of a “management revolution,” the “Fourth Industrial Revolution,” and a “second machine age.” Artificial intelligence is receiving similar hype, with AI being compared to the rise of electricity during the Industrial Revolution. Russian President Vladimir Putin says whatever country controls AI will become the “ruler of the world.” What’s more, renowned scientist Stephen Hawking warns that development of full AI “could spell the end of the human race.” There is similar hype around sustainability, albeit of a different flavor. “Sustainability is the primary moral and economic imperative of the 21st century,” says Mervyn King, former governor of the Bank of England. “It is one of the most important sources of both opportunities and risks for businesses. Nature, society, and business are interconnected in complex ways that should be understood by decision-makers.”


Differentiation through innovation: Banks pick fintech firms over bigtech

Big tech companies are seeing greater competition from fintech companies when it comes to providing banking solutions, say experts. "Businesses have started using Fintechs to solve many of the pain points in the banking value chain by doing smaller outcome-based projects, instead of signing up for large long-term deals with Bigtechs," said Sachin Seth, Partner and Fintech Leader, Advisory Services, EY India. ... “Large IT companies still manage the core engines for the bank; they understand the bank’s security and regulatory requirements and have tailored their systems to suit these needs over the years. Fintech companies too, as the business case grows, need to invest in these areas. The successful ones will eventually become mid- to large-sized companies, while hopefully retaining their innovation DNA,” said Axis’ Anand. While the competition large IT companies are seeing from fintech start-ups will only get fiercer, banking industry experts said that there is a strong need for collaboration. “Fintechs are nimble companies that think innovation first. However, they are not as well equipped to deploy the products. Fintech companies can drive innovation, but the commercialisation is better managed by bigtech companies,” said BoB’s Handa.


The 4 phases of digital transformation: a roadmap to Intelligent Automation

You’ve reached the end of the road in outsourcing. You’ve been dinged by the potholes of legacy systems, and your smartest people are too busy struggling under the load of paperwork. You suspect that there’s only one way to get past these roadblocks, and that’s to start a whole new journey. Next stop: Intelligent Automation. The only thing is that you have no idea what you’ll encounter along the way… The good news is, there are people who do. WorkFusion’s Client Strategy and Transformation team, which focuses on strategic advice and programmatic enablement for enterprises embarking on robotic process automation initiatives, has been down this road and around the block a few times already. They have seen patterns emerge and learned from their experiences. Which is why they wrote The 4 Phases of Digital Transformation: The Intelligent Automation Maturity Model. This complimentary 10-page eBook by WorkFusion will help you determine the best strategy for your operation by mapping each of the four stages of maturity that are relevant for most organizations.


The Brilliant Ways UPS Uses Artificial Intelligence, Machine Learning And Big Data


UPS developed its chatbot, UPS Bot, in house and released it for use just three months after the idea was born. This AI-enabled tool mimics human conversation and can respond to customer queries such as “Where is the nearest UPS location?” and can track packages and give out shipping rates. Customers can ask the bot questions either through text or voice commands through mobile devices, social media channels and virtual assistants such as Alexa and Google Assistant. The UPS Bot is able to recognize these requests and then takes the appropriate steps to complete them. The more “conversations” the bot has, the more learning it experiences to take the appropriate action in the future. During its peak period, UPS provided more than 137 million UPS My Choice alerts—the free system that lets residential customers decide “how, where and when home deliveries occur.” The chatbot is integrated with the UPS My Choice system, so customers are able to obtain information about their incoming packages and deliveries without providing a tracking number.


How Machine Learning Is Changing the World -- and Your Everyday Life

Computers can be programmed to determine individual study plans, specific to each student's needs. Algorithms can analyze test results, drastically reducing the time teachers spend on grading in their leisure time. A student's attendance and academic history can help determine gaps in knowledge and learning disabilities. These applications won't necessarily translate to a teacher-less classroom, but will facilitate the teaching and learning environments to enhance outcomes and ease the burden on both teacher and student. Legal firms are increasingly turning to machine learning to process massive amounts of data related to legal precedents. J.P. Morgan, for example, uses a software program dubbed COIN to review in seconds documents and previous cases that would otherwise take 360,000 hours. As with our teachers above, it's unlikely machine learning or AI will replace lawyers any time soon, given the necessity of rebuttal and human logic and appeal, but the incorporation of machine learning will surely reduce the time taken to put together a case, and it could expedite trials, speeding up the processes of the court.


How BuzzFeed Migrated from a Perl Monolith to Go and Python Microservices


The new microservices are developed using Python as the main language, with Go for the more performance-sensitive components. BuzzFeed’s engineering team have found that the two languages are very complementary, and it is relatively straightforward for individual developers to switch from one to the other as appropriate. At the time of writing they have around 500 microservices in stage and production environments on AWS. They break down their services using something that sounds somewhat similar to SCS; the home page on buzzfeed.com is one service, news pages are a separate service, as are author pages and so on. One challenge the team faced was with routing requests to the correct backend applications. Fastly, their CDN provider, has the ability to programmatically define behavioural logic at the edge using a C-based programming language called VCL, and initially the engineering team were writing all their routing logic in VCL directly. However, they found that as the configuration became more and more complex, making changes became more difficult, and being able to adequately test the configuration became much more important. Mark McDonnell, a Staff Software Engineer at BuzzFeed, told InfoQ that ...


Serverless development with Node.js, AWS Lambda and MongoDB Atlas

The developer landscape has dramatically changed in recent years. It used to be fairly common for us developers to run all of our tools (databases, web servers, development IDEs…) on our own machines, but cloud services such as GitHub, MongoDB Atlas and AWS Lambda are drastically changing the game. They make it increasingly easier for developers to write and run code anywhere and on any device with no (or very few) dependencies. A few years ago, if you crashed your machine, lost it or simply ran out of power, it would have probably taken you a few days before you got a new machine back up and running with everything you need properly set up and configured the way it previously was. With developer tools in the cloud, you can now switch from one laptop to another with minimal disruption. However, it doesn’t mean everything is rosy. Writing and debugging code in the cloud is still challenging; as developers, we know that having a local development environment, although more lightweight, is still very valuable.


Focus More On Conceptual Knowledge To Be A Successful Data Scientist

The trend is obviously increasing, with many companies recruiting for senior management positions in analytics. Having said that, it is still behind western countries. For example, in 2016 MIT Sloan Management Review reported that 54 percent of Fortune 1000 companies had a Chief Data Officer, but the corresponding number in India is much lower. This may be due to the fact that the number of analytics projects in India is still lower compared to western markets. However, with government policies to use AI in many government initiatives, this could change. At the lower level, it is business intelligence skills such as reporting and dashboard creation; this skill set still forms the majority of recruiting by Indian companies. At the higher level of AI, it is natural language processing (NLP) and other forms of unstructured data analysis, such as image processing using deep learning algorithms, that lead the hiring trend. Data Strategy Officers are also becoming common at many companies.



Quote for the day:


"The art of communication is the language of leadership." -- James Humes


Daily Tech Digest - June 23, 2018

$4.3 Million HIPAA Penalty for 3 Breaches

"Despite the encryption policies and high risk findings, MD Anderson did not begin to adopt an enterprisewide solution to implement encryption of ePHI until 2011, and even then, it failed to encrypt its inventory of electronic devices containing ePHI between March 24, 2011, and January 25, 2013," the statement adds. The administrative law judge agreed with OCR's arguments and findings and upheld OCR's penalties for each day of MD Anderson's noncompliance with HIPAA and for each record of individuals breached, OCR notes. "OCR is serious about protecting health information privacy and will pursue litigation, if necessary, to hold entities responsible for HIPAA violations," says OCR Director Roger Severino. "We are pleased that the judge upheld our imposition of penalties because it underscores the risks entities take if they fail to implement effective safeguards, such as data encryption, when required to protect sensitive patient information." OCR alleges that MD Anderson claimed that it was not obligated to encrypt its devices and asserted that the ePHI at issue was for "research," and thus was not subject to HIPAA's nondisclosure requirements.



Cultural, leadership issues plague many digital transformation efforts

Digital transformation is a complex effort from the internal perspective, and to make it even more challenging, it shouldn’t appear to be that way from the customer perspective, explains James Campbell, practice lead for experience design at Slalom. “Some of the biggest challenges with internal forces – like buy-in and commitment from all functional areas, sponsorship from executives, board and other governing bodies, and willingness to redefine KPIs – actually pale in comparison to the effort required to prevent your digital transformation from becoming your customer’s problem, too,” Campbell says. When organizations fail to realize that, poorly implemented digital transformation can cause lost sales, loyalty and public reputation, “and it can make or break the type of effort that will differentiate the companies of today from the companies of tomorrow,” Campbell says. “While many industries are considering digital transformation, CEOs in asset-intensive industries are less likely to consider IT a priority, and low levels of historic investment may have created an environment with poor digital readiness,” explains Allen E. Look.


How The TOGAF® Standard Enables Agility


Top-down, the Enterprise Strategic Architecture provides a high-level view of the area of the enterprise impacted by change; the Capability Architectures are detailed descriptions of (increments of) capability to be delivered. These are Sprints in the agile world. They are sufficiently detailed to be handed to developers for action. As the diagram shows, sprints can occur in parallel. A key consideration is that the sprints are time-boxed and aimed at addressing a set of bounded objectives. The Capability Architectures should be tightly scoped to address those objectives. The higher levels show the relationships and dependencies between capability increments and provide the framework for managing the risk of unanticipated consequences. They provide the information needed to assess the overall impact of a proposed change. Bottom-up, there is feedback from the implementation of capability increments which influences the higher levels. The enterprise strategic architecture may evolve as a result of experience gained from the deployment of each and every capability increment.


Silver Peak SD-WAN adds service chaining, partners for cloud security


These partnership additions build on Silver Peak's recent update to incorporate a drag-and-drop interface for service chaining and enhanced segmentation capabilities. For example, Silver Peak said a typical process starts with customers defining templates for security policies that specify segments for users and applications. This segmentation can be created based on users, applications or WAN services -- all within Silver Peak SD-WAN's Unity Orchestrator. Once the template is complete, Silver Peak SD-WAN launches and applies the security policies for those segments. These policies can include configurations for traffic steering, so specific traffic automatically travels through certain security VNFs, for example. Additionally, Silver Peak said customers can create failover procedures and policies for user access. Enterprises are increasingly moving their workloads to public cloud and SaaS environments, such as Salesforce or Microsoft Office 365. Securing that traffic -- especially traffic that travels directly over broadband internet connections -- remains top of mind for IT teams, however.


Musk says Tesla data leaked by disgruntled employee

“Could just be a random event, but as Andy Grove said, ‘Only the paranoid survive,’” Musk wrote Monday, referring to the late chairman and CEO of Intel Corp. “Please be on the alert for anything that’s not in the best interests of our company.” Tesla can ill afford manufacturing setbacks now. It’s racing to meet a target to build 5,000 Model 3s a week by the end of this month, a goal Musk told shareholders on June 5 that the company was “quite likely” to achieve. The company’s forecasts for generating profit and cash in the third and fourth quarters of this year are based on this objective, and falling short would reignite concerns about whether the company may need to raise more capital. A Tesla spokeswoman confirmed the authenticity of the Monday email, which CNBC reported first. Smoldering in an air filter in the welding area of Tesla’s body line was extinguished in a matter of seconds, she said. Production has resumed and there were no injuries or significant equipment damage, she added.


Three-month-old Drupal vulnerability is being used to deploy cryptojacking malware

The researchers note that this particular attack uses interesting techniques, including hiding behind the Tor network to evade detection. The malware also checks to see whether a previous miner is running on the system before installing the payload via a series of shell scripts and executables. As well as hiding behind the Tor network, the attacker or attackers are also using a Virtual Private Network (VPN) in an effort to hide their tracks, but there is a linked IP address. Researchers say there have been hundreds of attempts to conduct attacks via this IP over the last month, although not all involve the Drupal vulnerability: some are related to the Heartbleed vulnerability. There's no indication as to the exact number of cryptojacking attacks that have been conducted using the Drupal vulnerability, but it serves to remind organisations that they should be patching vulnerabilities -- especially those deemed critical -- in order to protect against attacks. "Patching and updating the Drupal core fixes the vulnerability that this threat exploits."


FBI warns of increasing ransomware, firmware attacks


Along with those newer types of attacks, the tried-and-true insider threat also isn’t going away soon, said Morrison, speaking at the Hewlett Packard Enterprise Discover conference in Las Vegas on Wednesday. The organizations taking advantage of those attacks are increasingly sophisticated and well-funded criminal groups. “We need to get off the mindset that criminals are living in their basement, that a cybercriminal is some kid that’s living in the basement of their mom’s house,” Morrison said. “These are fully functional, 24/7 data center operations, operating in countries where they have some kind of asylum, in many cases.” About 75 percent of the cyberattacks against companies in the United States come from organized crime groups, Morrison added. “Understand that’s the magnitude of what you’re facing,” he told the audience. In some cases, these criminal organizations also have ties to nation states. “We’re seeing this blending of nation state and criminal organizations,” Morrison said. After all, “why would a nation state take a chance of being exposed when they can just hire a criminal group?”


Early detection of compromised credentials can greatly reduce impact of attacks


There is a growing industry in the cybercrime ecosystem focused on obtaining valid login credentials using multiple mechanisms and tools. These tools nowadays can be cheaply acquired in the underground, darknet markets and forums. And you don’t have to be a highly seasoned cybercriminal to launch an attack. According to our credential detection data, since the start of 2018 up until the end of May, there has been a 39 percent increase in the number of compromised credentials that we have detected from Europe and Russia, compared to the same period in 2017. In fact, Blueliv’s observations conclude that Europe and Russia make up half of the world’s credential theft victims. We also found that when we remove Russia from the dataset, the growth figure for European theft victims jumps to 62 percent. These European growth figures tracked by us are surprisingly higher than North America’s, which recorded a decline by almost half in this period. We think that these cybercriminal success rates mean that the credential theft industry is growing in the European region, both in innovation and scope. We believe there are several reasons for this.


Blockchains on mobile, IoT devices: Can fog computing make it happen?

Edge computing is a way to bring the processing center closer to the source of data, or the “edge,” significantly cutting down costs and processing time by tapping into a network of computers that offer their storage and processing power to the network’s clients in exchange for payment. Edge computing doesn’t necessarily need to be blockchain-based, but in several ways the two technologies overlap. In essence, these computers are like blockchain miners, except anyone can use their processing power for any process at any given time—it could be mining, scientific calculations, video streaming, or anything else. Unlike blockchains, edge computing services are not limited to a specific use case. The quickest differentiator I’ve seen between edge and fog is from Cisco: “Fog computing is a standard that defines how edge computing should work, and it facilitates the operation of compute, storage and networking services between end devices and cloud computing data centers. Additionally, many use fog as a jumping-off point for edge computing.” Fog computing is another emerging technology that can make blockchains even more powerful than they already are.


CISO careers: Several factors propel high turnover


A CISO's role today is primarily risk management: they are more of an advisor and strategist, while being a technologist behind the scenes. Establishing a security risk steering committee with other C-suite members is one of several effective ways to engage with business leaders. The old ways of instilling fear, uncertainty and doubt to drive support for additional budget and large projects are long gone. The CISO should be perceived as a business partner, adaptable to business changes and threats, a team player, with a continuous improvement mindset across people, process and, sometimes, technology needs. Additionally, the CISO should be focused on self-improvement -- a coach and/or mentor is essential to becoming a truly effective senior leader. Athletes at the highest levels always have a coach, often many coaches, from experts in their sport to nutritionists who keep them as healthy as possible. Why shouldn't CISOs? The CISO has one of the most challenging roles and should have both a senior business leader and an industry peer as mentors and, if the organization supports it, an executive coach to improve their leadership and organizational influence skill set.



Quote for the day:


"It is easier to act yourself into a new way of thinking, than it is to think yourself into a new way of acting." -- A.J. Jacobs


Daily Tech Digest - June 22, 2018

Oracle now requires a subscription to use Java SE
Oracle has revamped its commercial support program for Java SE (Standard Edition), opting for a subscription model instead of one that had businesses paying for a one-time perpetual license plus an annual support fee. The subscriptions will be available in July 2018. (Personal, noncommercial usage remains free and does not require a subscription.) Called Java SE Subscription, the new program for mission-critical Java deployments provides commercial licensing, with features offered such as the Advanced Java Management Console. Oracle Premier Support is also included for current and previous Java SE releases; the subscription is required for Java SE 8 and includes support for Java SE 7. ... The price is $25 per month per processor for servers and cloud instances, with volume discounts available. For PCs, the price starts at $2.50 per month per user, again with volume discounts. One-, two-, and three-year subscriptions are available. Oracle has published the terms of its new Java SE Subscription plans. The previous pricing for the Java SE Advanced program was $5,000 for a license for each server processor plus a $1,100 annual support fee per server processor, as well as a $110 one-time license fee per named user and a $22 annual support fee per named user.
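
As a back-of-the-envelope comparison using the list prices quoted above (the eight-processor count and three-year term are hypothetical, and volume discounts are ignored):

    # Rough cost comparison at Oracle's quoted list prices; processor
    # count and term are made up for illustration.
    processors = 8
    years = 3

    subscription = 25 * 12 * processors * years          # $25/processor/month
    legacy = (5000 + 1100 * years) * processors          # license + annual support

    print(f"Java SE Subscription over {years} years: ${subscription:,}")  # $7,200
    print(f"Old license plus support fees:          ${legacy:,}")         # $66,400

At list prices the $300-per-processor annual subscription undercuts even the old $1,100 annual support fee on its own, though unlike a perpetual license it must be renewed for as long as the software is used commercially.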


Making intelligence intelligible with Dr. Rich Caruana

Sometimes, it's just a black box because it's protected by IP. So, many people will have heard of this model that is used for recidivism predictions. This model was created by a company, and the model is a pay-for-use model. And the model is just not something that's known to us, because we're not allowed to know. By law, it's something the company owns, and the courts have, several times, upheld the right of the company to keep this model private. So maybe you're a person whom this model has just predicted to be at high risk of committing another crime and, because of that, maybe you're not going to get parole. And you might say, "Hey, I think I have a right to know why this model predicts that I'm high-risk." And so far, the courts have upheld the right of the company that created the model to keep the model private and not to tell you in detail why you're being predicted as high or low risk. Now, there are good reasons for this. You don't necessarily want people to be able to game the model. And in other cases, you really want to protect the company that went to the expense and risk of generating this model. But that's a very complex question.


A QA team finds continuous testing benefits worth the effort


Continuous integration was born around the idea that the earlier you find a bug, the cheaper it is to fix. But this becomes problematic if there is no easy, fast and reliable way to assess whether changes are ready to be integrated and then ready to go to production. When you adopt continuous testing as a key practice, your code must always be ready for integration, according to Isabel Vilacides, quality engineering manager at CloudBees. "Tests are run during development and on a pull request basis," she explained. "Once it's integrated, it's ready to be delivered to customers." Continuous testing doesn't stop at functional testing; it also covers nonfunctional aspects, such as performance and security. The process aims to prevent bugs through code analysis, before risks become apparent in production. Continuous testing requires cohesive teams, where quality is everyone's responsibility, instead of separate teams for development, testing and release. The approach also makes automation a priority and shifts quality to the left, making it an earlier step in the pipeline.
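
A minimal sketch of such a pre-merge quality gate is below: run static analysis and the unit tests on every pull request, and block integration if either fails. The tools (flake8, pytest) and paths are illustrative choices, not CloudBees' prescribed stack.

    # Pre-merge quality gate sketch: static analysis plus unit tests,
    # failing the pipeline (non-zero exit) if either step fails.
    import subprocess
    import sys

    def run(step: str, cmd: list[str]) -> bool:
        print(f"--- {step} ---")
        return subprocess.run(cmd).returncode == 0

    ok = run("static analysis", ["flake8", "src/"])
    ok = run("unit tests", ["pytest", "tests/", "-q"]) and ok

    sys.exit(0 if ok else 1)   # a non-zero exit blocks the merge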


CISO soft skills in demand as position evolves into leadership role

In the old days, the CISO, I was told, was just an advisory position. Now, the roles I've held in the last seven years or so are much more than advisory. Advisory is part of it for sure, but there's a lot more leadership involved. I see it becoming more and more a position reporting directly to the CEO, a truly C-level position. I see CISOs having vice presidents reporting to them going forward. And I see my job as being increasingly described as chief ethicist, asking: What's the right thing to do, and not just what's the most secure thing to do? What's the proper behavior? What do customers expect from us? If a compromise has to be made, what's the most ethical compromise to make? ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: Women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.


It’s not easy to move from a private cloud to a hybrid cloud

Sadly, the move from a private cloud to a public cloud is not easy, whether you go hybrid or all-public. The main reason is that there is no direct mapping from private cloud services, which cover the basics (storage, compute, identity and access management, and database), to public cloud services, which offer those basics plus thousands of higher-end services. Private clouds today are where public clouds were in 2010; in essence, you're migrating across a ten-year technology gap as you move your applications from private to public. Complexity also comes in when you've already coupled your applications to the services in the private cloud, which is typically going to be OpenStack. There are very few OpenStack deployments on public clouds, and none on the Big Three providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure). That means you can't do a one-to-one mapping of cloud services from your private cloud to the public clouds, which in turn means you need to remap those services to similar services on the public cloud.
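
To make the remapping exercise concrete, a migration usually starts from a service map like the sketch below. The OpenStack components and their rough public-cloud equivalents are well known, but the equivalence is approximate: APIs and behavior still differ, which is where most of the migration work lies.

    # Sketch of a private-to-public service map: each core OpenStack
    # service needs a rough public-cloud equivalent. Approximate only.
    service_map = {
        "Nova (compute)":   {"aws": "EC2", "gcp": "Compute Engine", "azure": "Virtual Machines"},
        "Swift (storage)":  {"aws": "S3",  "gcp": "Cloud Storage",  "azure": "Blob Storage"},
        "Keystone (IAM)":   {"aws": "IAM", "gcp": "Cloud IAM",      "azure": "Azure AD"},
        "Trove (database)": {"aws": "RDS", "gcp": "Cloud SQL",      "azure": "SQL Database"},
    }

    for svc, t in service_map.items():
        print(f"{svc:18} -> AWS {t['aws']}, GCP {t['gcp']}, Azure {t['azure']}")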


The rise of active defense in cybersecurity strategies

As in any game against an adversary, you need both defensive and offensive strategies. An active defense adds offense-driven actions so that organizations can proactively detect and derail would-be attackers before they have time to get comfortable within the network, stopping attacks early and gathering the threat intelligence required to understand the attack and prevent a similar recurrence. Sometimes active defense includes striking back at an attacker, but this is reserved for military and law enforcement, which have the resources and authority to confirm attribution and take appropriate action. An active defense strategy changes the playbook for cybersecurity professionals by combining early detection, substantiated alerts and information sharing to improve incident response and fortify defenses. It is no longer a "nice to have" but is becoming widely accepted as a "must have," as prevention-only tactics are no longer enough. With well-orchestrated breaches continuously making headlines, an active defense strategy is becoming a priority.
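
One common active-defense building block is a decoy service: since no legitimate user should ever touch it, any connection is a substantiated, high-fidelity alert. A minimal sketch follows; the port and log format are arbitrary choices.

    # Minimal decoy listener: every connection to this port is suspect,
    # so each one generates a high-confidence alert for responders.
    import datetime
    import socket

    DECOY_PORT = 2222   # dressed up to look like an admin SSH port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            stamp = datetime.datetime.utcnow().isoformat()
            print(f"{stamp} ALERT decoy touched from {addr}:{port}")
            conn.close()   # in practice: alert the SOC, optionally engage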


This new Windows malware wants to add your PC to a botnet - or worse

The malware comes equipped with three different layers of evasion techniques, which the researchers at Deep Instinct who uncovered it describe as complex, rare and "never seen in the wild before". Dubbed Mylobot after a researcher's pet dog, the malware's origins and delivery method are currently unknown, but it appears to have a connection to Locky ransomware -- one of the most prolific forms of malware over the last year. The sophisticated nature of the botnet suggests that those behind it aren't amateurs: Mylobot incorporates various techniques to avoid detection, including anti-sandboxing, anti-debugging, encrypted files and reflective EXE -- the ability to execute EXE files directly from memory without having them on disk. That technique is uncommon, was only uncovered in 2016, and makes the malware even harder to detect and trace. On top of this, Mylobot incorporates a delaying mechanism that waits for two weeks before making contact with the attacker's command and control servers -- another means of avoiding detection.
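
That two-week delay is itself a detection opportunity: a process that sits silent for days and then makes its first outbound connection is anomalous. A toy sketch of that heuristic over hypothetical process telemetry:

    # Toy heuristic for the delay tactic: flag processes whose first
    # outbound connection comes long after process start.
    # The telemetry records are hypothetical.
    from datetime import datetime, timedelta

    telemetry = [
        # (process, started, first outbound connection)
        ("updater.exe",  datetime(2018, 6, 1), datetime(2018, 6, 1, 0, 5)),
        ("svchost_.exe", datetime(2018, 6, 1), datetime(2018, 6, 15)),
    ]

    THRESHOLD = timedelta(days=13)

    for name, started, first_conn in telemetry:
        if first_conn - started >= THRESHOLD:
            days = (first_conn - started).days
            print(f"Suspicious: {name} was silent for {days} days before calling out")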


Plan now for your migration to Windows Server 2019
Web applications running on IIS are easy to test because most code is just HTML, .NET or another Web app that runs on top of the IIS platform. Setting up a Windows Server 2019 server with IIS and then uploading Web code to it is a quick and easy way to confirm that the Web app works, and it can easily be the first 2019 server added to an environment. Fileservers are also good early targets for migrating from old to new. Fileservers often have gigabytes or even terabytes of data to copy across, and they are also the systems that may not have been upgraded recently. In early-adopter environments, the old fileservers are often still running Windows Server 2008 (which reaches end of support in January 2020) and could use an upgrade. File migration tools like Robocopy, or a drag-and-drop between Windows Explorer windows, can retain tree and file structures as well as access permissions as content is copied between servers. Tip: after content is copied across, new servers can be renamed with the old server name, minimizing interruption of user access.
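
A typical Robocopy invocation for such a copy looks like the following (the paths are placeholders; verify the switches in your own environment before relying on them). Note that /MIR mirrors the source, deleting destination files not present at the source, while /COPYALL carries NTFS security over along with file data, attributes and timestamps:

    robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /R:1 /W:1 /LOG:C:\logs\share-migration.log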


Strategies for Decomposing a System into Microservices

Sometimes you will find that they have different mental models for the same business concepts, or use the same terms to describe different concepts; if so, it's an indication that these concepts belong to different bounded contexts. Initially, Khononov and his team used these discovered boundaries to define services, with each boundary becoming a service. He notes, though, that these services represented quite wide business areas, sometimes with a bounded context covering multiple business subdomains. As their next step, they instead used these subdomains as boundaries and created one service for each business subdomain. In Khononov's experience, a one-to-one relationship between a subdomain and a service is quite a common approach in the DDD community, but they didn't settle for this; they continued and strived for even smaller services. Looking deeper into the subdomains, they found business entities and processes and extracted these into their own services. At first this final approach failed miserably, but Khononov points out that in later projects it has been more successful.
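
A small hypothetical illustration of why identical terms in different bounded contexts end up as separate models (and eventually separate services): "account" means different things to billing and to identity.

    # Hypothetical sketch: the same term, "account", models different
    # concepts in two bounded contexts, so each context keeps its own
    # definition instead of sharing one bloated class.
    from dataclasses import dataclass

    @dataclass
    class BillingAccount:        # "account" in the billing context
        account_id: str
        balance_cents: int
        currency: str

    @dataclass
    class IdentityAccount:       # "account" in the identity context
        account_id: str
        username: str
        roles: list[str]

Keeping the two models apart lets each subdomain, and later each service, evolve independently.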


Why you should train your staff to think securely

Far too often, information security teams have only the broadest overview of the wider workings of their organisations. Other staff, meanwhile, tend to have little knowledge of or interest in information security practices, which they often believe have been designed to hinder their day-to-day work. However, when any employee with Internet access can jeopardise the entire organisation with a single mouse-click, it should be clear that the responsibility for information security lies with every member of staff and that security practices need to be embedded in the working practices of the whole business. Insider attacks are not limited to the malicious actions of rogue staff. The term also refers to the unwitting behaviour of improperly trained employees, or to the exploitation of inappropriately applied privileges and poor password practices by malicious outsiders. Staff need regular training on information security practices to ensure they’re aware of the risks they face on a daily basis. The vast majority of malware is spread by drive-by downloads and phishing campaigns, both of which exploit human error.



Quote for the day:


"Trust is one of the greatest gifts that can be given and we should take creat care not to abuse it." --Gordon Tredgold