Daily Tech Digest - April 18, 2019

Automation is a machine and a machine only does what it is told to do. Complicated tests require a lot of preparation and planning and also have certain boundaries. The script then follows the protocol and tests the application accordingly. Ad-hoc testing helps testers to answer questions like, “What happens when I follow X instead of Y?” It helps the tester to think and test using an out-of-the-box approach, which is difficult to program in an automation script. Even visual cross-browser testing needs a manual approach. Instead of depending on an automated script to find out the visual differences, you can check for the issues manually either by testing on real browsers and devices or, even better, by using cloud-based, cross-browser testing tools, which allow you to test your website seamlessly across thousands of different browser-device-operating system combinations. ... Having a manual touch throughout the testing procedure instead of depending entirely on automation will ensure that there are no false positives or false negatives as test results after a script is executed.


Understanding the key role of ethics in artificial intelligence

It has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract and high level. This is not a bad thing, but it often assumes that such ‘talk’ is tantamount to addressing the challenges of ethical AI. The facts, however, are much more complex. For example, guidelines themselves are often ineffective (a recent study showed the ACM’s code of ethics had little effect on the decision-making process of engineers). Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behavior in the context of the complex machinery that underpins AI is extremely challenging. ... Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives: college admissions, financial decision-making systems, and the news we consume on Facebook and other media sites.


Researchers: Malware Can Be Hidden in Medical Images
The "flaw" discovered in the DICOM file format specification could allow attackers to embed executable code within DICOM files to create a hybrid file that is both a fully functioning Windows executable as well as a specification-compliant DICOM image that can be opened and viewed with any DICOM viewer, the report says. "Such files can function as a typical Windows PE file while maintaining adherence to the DICOM standard and preserving the integrity of the patient information contained within," according to the report. "We've dubbed such files, which intertwine executable malware with patient information, PE/DICOM files." By exploiting this design flaw, the report says, attackers could "take advantage of the abundance and centralization of DICOM imagery within healthcare organizations to increase stealth and more easily distribute their malware, setting the stage for potential evasion techniques and multistage attacks." The fusion of fully functioning executable malware with HIPAA protected patient information adds regulatory complexities and clinical implications to automated malware protection and typical incident response processes, the researchers say.


Sometimes, rather than look at problem areas in the business, he says the team focuses on exploring pure technology. As an example, Chatrain points to Generative Adversarial Networks (GANs), algorithms that can generate fake data, such as fake pictures of people who do not actually exist. “We dedicate part of our exploratory time to such techniques and technologies and then look for applications,” he says. Looking at a practical example of how a fake data algorithm could be deployed, he says: “With GDPR and the need to feed test systems with high volumes of realistic data, we used [synthetic data algorithms] to create fake travellers with travel itineraries.” Such synthetic data is indistinguishable from the data that represents the travel plans of real people, and this data can be used to test the robustness of systems at Amadeus. “Today, no one tests the systems if we have twice as much data,” says Chatrain. But this is possible if data for a vast increase in passenger numbers is simply generated via a synthetic data algorithm. Beyond being used to test application software, he says synthetic data also enables Amadeus to anonymise the data it shares with third parties. “We are not allowed to share [personal] data, but we still need a business partnership.”
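To make the idea concrete, here is a minimal Python sketch of what such a generator might look like. It is purely illustrative: the field names and value pools are invented for the example and have nothing to do with Amadeus's actual schema or tooling.

```python
import random
import uuid
from datetime import date, timedelta

# Illustrative value pools; a real generator would sample from distributions
# learned from production data so the synthetic records stay statistically realistic.
FIRST_NAMES = ["Ana", "Liam", "Mei", "Omar", "Sofia", "Yuki"]
LAST_NAMES = ["Garcia", "Smith", "Chen", "Haddad", "Rossi", "Tanaka"]
AIRPORTS = ["LHR", "CDG", "MAD", "JFK", "SIN", "NRT"]

def fake_traveller() -> dict:
    """Generate one synthetic traveller with a simple round-trip itinerary."""
    origin, destination = random.sample(AIRPORTS, 2)
    departure = date.today() + timedelta(days=random.randint(1, 180))
    return {
        "traveller_id": str(uuid.uuid4()),  # no link to any real person
        "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
        "itinerary": [
            {"from": origin, "to": destination, "date": departure.isoformat()},
            {"from": destination, "to": origin,
             "date": (departure + timedelta(days=random.randint(2, 14))).isoformat()},
        ],
    }

if __name__ == "__main__":
    # Doubling or tripling the volume of test data is just a matter of changing this number.
    dataset = [fake_traveller() for _ in range(100_000)]
    print(dataset[0])
```

Because every record is generated rather than copied, the output can be fed to test systems or shared with partners without exposing any real passenger's travel plans.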


What is project portfolio management? Aligning projects to business goals

With PPM, not only are project, program, and portfolio professionals able to execute at a detailed level, but they are also able to understand and visualize how project, program, and portfolio management ties to an organization’s vision and mission. PPM fosters big-picture thinking by linking each project milestone and task back to the broader goals of the organization. ... Capacity planning and effectively managing resources is largely dependent on how well your PMO executes its strategy and links the use of resources to company-wide goals. It is no secret that wasted resources are one of the biggest issues that companies encounter when it comes to scope creep. PPM decreases the chances of wasted resources by ensuring resources are allocated based on priority and are being effectively sequenced and wisely leveraged to meet intended goals. ... PMOs that communicate to project teams and other stakeholders, such as employees, why and how project tasks are vital in creating value increase the likelihood of higher productivity.


Startup MemVerge combines DRAM and Optane into massive memory pool
Optane memory is designed to sit between high-speed memory and solid-state drives (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor. Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache, but apps don’t natively support it; they need to be tweaked to work properly with Optane memory. As it was explained to me, apps aren’t designed for persistent memory, where data is already in memory on power-up rather than having to be loaded from storage. The app has to know that memory doesn’t go away and that it does not need to shuffle data back and forth between storage and memory. Until they are adapted for this, apps don’t work natively with persistent memory.


The group hopes to turn out the first iteration of its Token Taxonomy Framework (TTF) later this year; afterward, it plans to educate the blockchain community and collaborate through structured Token Definition Workshops (TDW) to define new or existing tokens. Once defined, the taxonomy can be used by businesses as a baseline to create blockchain-based applications using digital representations of everything from supply chain goods to non-fungible items such as invoices. "We'll do some workshops...to validate and make sure we have the base definition of a non-fungible token," said Marley Gray, Microsoft's principal architect for Azure blockchain engineering and a member of the EEA's Board of Directors. "As we go through workshops, we will probably find we should add this attribute or this clarification or this example that helps someone understand it." The organizations that have agreed to participate in the standardization effort include Accenture, Banco Santander, Blockchain Research Institute, BNY Mellon, Clearmatics, ConsenSys, Digital Asset, EY, IBM, ING, Intel, J.P. Morgan, Komgo, R3, and Web3 Labs.



Each micro-component runs an independent processing flow that performs a single task. For example, if your application has a network layer, you may also have Network Receiver and Network Sender components, which are solely responsible for receiving/sending data through the network. If your application has a logging layer, it might also be implemented as an independent micro-component. Each micro-component defines its own interface of outgoing/incoming events, and the internal processing flow for them. For example, the Network Receiver might define the OutgoingClientRequests channel, which would be populated with newly received requests from the users. Interfaces, as you might guess, are implemented on top of channels, so from this perspective the communication flows look very obvious, predictable, and easily maintainable. The core’s role is to connect various outgoing channels with various incoming channels and to enable data flow between various micro-components.
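A hypothetical Python sketch of that wiring, using standard-library queues in place of channels; the component and channel names follow the example above (Network Receiver, OutgoingClientRequests), and everything else is assumed for illustration:

```python
import queue
import threading

class NetworkReceiver:
    """Micro-component: turns raw input into request events on its outgoing channel."""
    def __init__(self):
        self.outgoing_client_requests = queue.Queue()  # the component's declared interface

    def run(self):
        for raw in ["GET /a", "GET /b", None]:  # stand-in for real network input
            self.outgoing_client_requests.put(raw)

class RequestHandler:
    """Micro-component: consumes request events from its incoming channel."""
    def __init__(self, incoming: queue.Queue):
        self.incoming = incoming

    def run(self):
        while True:
            request = self.incoming.get()
            if request is None:  # shutdown sentinel
                break
            print(f"handled {request}")

# The "core" only wires outgoing channels to incoming channels; it holds no business logic.
receiver = NetworkReceiver()
handler = RequestHandler(incoming=receiver.outgoing_client_requests)

threads = [threading.Thread(target=receiver.run), threading.Thread(target=handler.run)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each component only knows its own channels, swapping a component out (or testing it in isolation) does not require touching the others.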


Cisco Talos details exceptionally dangerous DNS hijacking attack

Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by DNSpionage. In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. Let's Encrypt provides X.509 certificates for Transport Layer Security (TLS) free of charge to the user, Talos said. The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization.


Wipro Detects Phishing Attack: Investigation in Progress

Wipro's systems were seen being used as jumping-off points for digital phishing expeditions targeting at least a dozen Wipro customer systems, the blog says. "Wipro's customers traced malicious and suspicious network reconnaissance activity back to partner systems that were communicating directly with Wipro's network," according to the blog. In a statement, Wipro says: "Upon learning of the incident, we promptly began an investigation, identified the affected users and took remedial steps to contain and mitigate any potential impact." The firm tells ISMG that none of its customers' credentials have been affected, as was alleged in the blog. Some security experts, however, say Wipro may be the victim of a nation-state sponsored attack. "It is most likely by a nation-state. They use this modus operandi to breach a vendor network first and through that route they attack their customers," says a Bangalore-based security expert, who did not wish to be named. "That is because customers will consider Wipro's network safe."



Quote for the day:


"A good leader leads the people from above them. A great leader leads the people from within them." -- M.D. Arnold


Daily Tech Digest - April 17, 2019

What SDN is and where it’s going

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network. Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for Pluribus. “At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.” ... Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said.


Use of AI in wealth management must be applied smartly

“AI can offer a solution to these problems by helping to automate on-boarding processes, provide smarter access to data and create new customer experiences. However, it’s critical any implementation be undertaken smartly. It shouldn’t be a case of automating for automation’s sake. Because of this we see the use of AI best applied in small-steps. “This starts with automating and streamlining manual processes, such as onboarding a new client. This could include all forms of engagement from initial communications, anti-money laundering checks, risk profiling, and all the legal documentation in between. Additionally, by using intelligent information management solutions, staff have the means to simplify how they access, secure, process and collaborate on documentation. Doing so will aid productivity, enabling staff to find and access information across their systems much faster so they can build stronger relationships with their clients.


Security Is Key To The Success Of Industry 4.0

There is often a perception among manufacturers that cloud computing is less secure than managing data on-site. The reality is that the opposite is true. Network security is closely related to physical access. After all, in an on-site server room, anyone could gain access, pop in a USB stick, and steal sensitive information. Conversely, cloud vendors store data in locations locked down with security guards and numerous physical barriers between any would-be hacker and the target server. Additionally, the cloud offers more network resilience. Businesses that rely on on-premise servers face exposure and operational risk during an act of force majeure, such as a fire or natural disaster. With the cloud, that risk is spread over multiple secure locations, significantly reducing the chance of disruption. Security is an ongoing concern; there will always be new vulnerabilities. Many of the biggest hacks – such as the Petya malware virus that first appeared in 2016 – targeted old Windows technology, which is why it is key to ensure the software is always up to date.


C-Suite: The New Main Target of Phishing

Evolving phishing attacks mean that criminals are continually looking for new ways to completely mask their malicious URLs, especially on mobile devices. They either hide them behind a page like Google Translate that users are already familiar with or completely trick users with custom web fonts and altered characters. One of the latest approaches is to create an Office 365 meeting invite that contains quiz buttons or a poll asking recipients to pick the topic or date for the next meeting; employees that end up clicking are presented with a fake Office 365 login page where they enter their O365 credentials and then lose control over their email account. Another approach is an email that comes from someone you know with a request to take a look at something for them. When you click on the link or attachment, malware installs on your system, takes over your email client, and then emails the same message from you to all your contacts. All is not lost, however. There is a way to help prevent and thwart these attacks. You need a security awareness program that instils a culture of security throughout your organization starting in the boardroom and leading by example.



While this bill remains on the House and Senate floor, there are some ways that state and local governments can begin securing their systems. The first step should be an audit, allowing key decision-makers to get on the same page about the status of their security. This audit should include secretaries of state, members of the academic community and all cybersecurity staff. Everyone should review the cybersecurity controls and the threat vectors that have been exploited in local systems. Improperly informed stakeholders are the greatest vulnerability. U.S. election security needs greater state-by-state alignment. Elections are run on a hodgepodge of systems that vary from state to state, including paper ballots, electronic screens and Internet voting. Before local elections, midterms and the 2020 presidential election, state officials need to meet with their Boards of Elections and document their end-to-end election process with all of its systems, dependencies and interfaces.


Surviving the existential cyber punch

Top-notch organisations understand the threat environment well. They invest time and effort to maintain situational awareness as to who also values their information and could serve as a threat. They understand that threats may come from many vectors including the physical environment, natural disasters, or human threats. Further, they understand that human threats include such entities as vandals, muggers, burglars, spies, saboteurs, and careless, negligent or indifferent personnel in their own ranks. They invest in information sharing organisations, subscribe to threat information sources, and share their own observations as part of the Cyber Neighbourhood Watch construct. These organisations also know the importance of maintaining positive relationships with the cyber divisions of law enforcement organisations. Even before you have been attacked, your local cyber law enforcement organisation can serve as a rich source of threat intelligence that can help you better manage your cyber risk exposure.


Should that be a Microservice? Keep These Six Factors in Mind


If a module needs to have a completely independent lifecycle, then it should be a microservice. It should have its own code repository, CI/CD pipeline, and so on. Smaller scope makes it far easier to test a microservice. I remember one project with an 80 hour regression test suite! Needless to say, we didn’t execute a full regression test very often. A microservice approach supports fine-grained regression testing. This would have saved us countless hours. And we would have caught issues sooner. ... If the load or throughput characteristics of parts of the system are different, they may have different scaling requirements. The solution: separate these components out into independent microservices! This way, the services can scale at different rates. Even a cursory review of a typical architecture will reveal different scaling requirements across modules. Let’s review our Widget.io Monolith through this lens.


Strong security defense starts with prioritizing, limiting data collection

As cybercrime, user fraud and other security threats become more prevalent and detrimental, the ability to confidently know who you’re dealing with online has become essential, but what most companies tend to overlook is the responsibility and liability that they automatically assume when they collect and store personal data in order to validate their constituents. As a result, some businesses hold large volumes of personal data because they believe it’s necessary for comprehensive identity and credential verification, but this practice can be risky, especially for companies with weak or limited data protection protocols in place. Data breaches have costly repercussions, including loss of customers, compromised intellectual property, loss of brand trust and, of course, meaningful revenue declines, but regulatory penalties can be the most expensive consequence of all. For example, violating GDPR’s strict rules around data privacy can warrant fines of up to €20M, or 4 percent of the worldwide annual revenue of a company.
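For context, the GDPR figure quoted above is a cap set at whichever of the two amounts is higher. A one-line Python sketch of that arithmetic (amounts in euros):

```python
def gdpr_max_fine(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements:
    the greater of EUR 20M or 4% of worldwide annual revenue."""
    return max(20_000_000, 0.04 * worldwide_annual_revenue_eur)

# For a company with EUR 3bn in annual revenue, the cap is EUR 120M, not EUR 20M.
print(gdpr_max_fine(3_000_000_000))
```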


How botnets pose a threat to the IoT ecosystem 


Botnets are particularly challenging because they evolve over time and new forms constantly emerge, one of which is TheMoon. Benjamin tells Computer Weekly: “Threat researchers at CenturyLink’s Black Lotus Labs recently discovered a new module of IoT botnet called TheMoon, which targets vulnerabilities in routers within broadband networks.” Benjamin explains that a previously undocumented module, deployed on MIPS devices, turns the infected device into a Socks proxy that can be sold as a service. “This service can be used to circumnavigate internet filtering or obscure the source of internet traffic as a part of other malicious actions,” he says.  Attackers are using botnets such as TheMoon for a range of crimes, including credential brute forcing, video advertisement fraud and general traffic obfuscation. “For example, our team observed a video ad fraud operator using TheMoon as a proxy service, impacting 19,000 unique URLs on 2,700 unique domains from a single server over a six-hour period,” says Benjamin.


Cryptocurrencies Will Never Replace Us, Cries Romanian Central Bank Official

Daianu went on to defend the state’s role in issuing currency saying that it was the ‘only possible last-resort lender’. In this regard, the central bank official implied that during a financial crisis, only the state can save the situation: In markets, the state is the only possible last-resort lender. When the banking system was saved, it wasn’t crypto banks that were saved. Central banks intervened by issuing base currency, which was followed by non-conventional measures. This statement is likely to get Daianu in trouble with crypto enthusiasts as the unhindered printing of money is what spawned cryptocurrencies as we know them today. The central bank official also revealed that centralized institutions are yet to understand the importance of the deflationary approach cryptocurrencies such as Bitcoin have taken. This was demonstrated by his statement that the central banks’ answer to cryptocurrencies is to issue a digital currency that can ‘multiply’!



Quote for the day:


"And the trouble is, if you don’t risk anything, you risk more." -- Erica Jong


Daily Tech Digest - April 16, 2019

IT pursues zero-touch automation for application support


Automation is a top goal, from application conception -- or selection, in the case of a third-party business application -- through adoption and use. Executive-level management wants zero-touch automation that controls every application, all the IT resources it runs on and every step of every development and operations process. Zero-touch automation, sometimes called ZTA, covers two specific goals: sustain an infrastructure that supports applications, databases and workers, and accurately automate application mapping onto IT infrastructure. The former is about analytics and capacity planning, and the latter underpins practices such as DevOps and orchestration. DevOps, both as technologies and cultural changes that drive faster, better software delivery and operations, predates advances in cloud computing and virtualization. Development teams would build something and turn it over to operations to run, without consideration for the operational deployment requirements.


Nutanix powers Manchester City Council’s IT


The council assessed Nutanix, HPE SimpliVity, HPE Synergy and the VxRail appliance from Dell-EMC and VMware. Farrington says it selected Nutanix running on a Supermicro appliance because “Nutanix offered the closest to a silver bullet – we could get everything from a single vendor”. In Farrington’s experience, HCI gives the council greater flexibility than traditional IT infrastructure. One benefit is a distributed storage fabric with thin provisioning, which enables the council to make the most of its storage capacity. “We have the ability to scale quickly. The ability to add another storage and compute device quickly is beneficial,” he says. “We also benefit from the deduplication and compression services that are built in.” HCI has also provided a way to bring together the support teams for Windows servers and storage. “I had six teams to look after the datacentre facility,” says Farrington. “Historically, we had two teams – one looked after our 900 Windows servers, the other looked after storage and backup. ...”


Top 10 Features to Look for in Automated Machine Learning


Feature engineering is the process of altering the data to help machine learning algorithms work better, which is often time-consuming and expensive. While some feature engineering requires domain knowledge of the data and business rules, most feature engineering is generic. Look for an automated machine learning platform that can automatically engineer new features from existing numeric, categorical, and text features. You will want a system that knows which algorithms benefit from extra feature engineering and which don’t, and only generates features that make sense given the data characteristics. ... It’s quite standard for machine learning software to train the algorithm on your data. After all, you wouldn’t want to manually do Newton-Raphson iteration would you? Probably not. But, often there’s still the hyperparameter tuning to worry about. Then you want to do feature selection, to improve both the speed and accuracy of a model. Look for an automated machine learning platform that uses smart hyperparameter tuning, not just brute force, and knows the most important hyperparameters to tune for each algorithm.
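As a rough illustration of sampling the hyperparameter space instead of brute-forcing it, here is a hedged scikit-learn sketch; real AutoML platforms typically go further, with Bayesian or bandit-based search, and the parameter ranges below are assumptions chosen just for the example.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Only the hyperparameters that matter most for this algorithm are searched,
# and they are sampled from distributions rather than exhaustively enumerated.
param_distributions = {
    "n_estimators": randint(50, 400),
    "max_depth": randint(2, 6),
    "learning_rate": uniform(0.01, 0.2),
    "subsample": uniform(0.6, 0.4),
}

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions,
    n_iter=25,          # 25 sampled configurations instead of a full grid
    cv=3,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The point of the sketch is the trade-off: a grid over the same ranges would cost thousands of fits, while a focused sample over the hyperparameters that matter most usually gets close to the same score in a fraction of the time.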


Machine Learning Widens the Gap Between Knowledge and Understanding


Given how imperfect our knowledge has always been, this assumption has rested upon a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus, at least, somewhat pliable to our will. But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to understanding it than we can, and they, as machines, don’t really understand anything at all. This, in turn, challenges another assumption we hold one level further down: The universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth.


How Azure uses machine learning to predict VM failures


On average, disk errors start showing up between 15 and 16 days before a drive fails, and in the last 7 days before it fails reallocated sectors triple and device resets go up tenfold. Behaviour and failure patterns vary from one drive manufacturer to another, and even between different models of hard drive from the same vendor. The telemetry for training the machine learning system has to be collected from different kinds of workloads, because that affects how quickly the failure is going to happen: if the VM is thrashing the disk, a drive with early signs of failure will fail fairly quickly, whereas the same drive in a server with a less disk-intensive workload could carry on working for weeks or months. Azure has a similar machine-learning system that predicts failures of compute nodes. In both cases, instead of trying to definitively predict whether a specific piece of hardware is failing, the systems rank them in order of how error-prone they are. The top systems on the list stop accepting new VMs and have running VMs live-migrated off onto different nodes, and then get taken out of service for testing.
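A hedged sketch of the ranking idea, not Azure's actual pipeline: train a classifier on historical telemetry, score the live fleet, and take only the top-ranked, most error-prone nodes out of rotation. The telemetry columns, labels, and thresholds below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative telemetry columns: [reallocated_sectors, device_resets, read_errors]
rng = np.random.default_rng(0)
X_history = rng.poisson(lam=[2.0, 1.0, 5.0], size=(5000, 3))
y_history = (X_history[:, 0] > 4).astype(int)  # stand-in label: "drive failed within N days"

model = GradientBoostingClassifier(random_state=0).fit(X_history, y_history)

# Score the current fleet and rank nodes by failure probability instead of
# making a hard healthy/failing call for each one.
fleet = {f"node-{i:03d}": rng.poisson(lam=[2.0, 1.0, 5.0]) for i in range(20)}
scores = {name: model.predict_proba(t.reshape(1, -1))[0, 1] for name, t in fleet.items()}
ranked = sorted(scores, key=scores.get, reverse=True)

TOP_K = 3  # only the most error-prone nodes stop taking new VMs and get drained
for name in ranked[:TOP_K]:
    print(f"{name}: p(failure)={scores[name]:.2f} -> cordon, live-migrate VMs, test hardware")
```

Ranking rather than classifying is the key design choice: it lets operators cap how much capacity is pulled for testing while still prioritising the riskiest hardware first.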



SQL Server users could already run the database themselves on Google Cloud Platform (GCP) via VMs, but Google will fully manage the upcoming service through its Cloud SQL offering, which already features PostgreSQL and MySQL. Google's managed SQL Server service will support all editions of SQL Server 2017, which also has backward compatibility with older versions of the database, said Dominic Preuss, director of product management for Google Cloud, at the Cloud Next conference here this week. AWS has offered a similar service through its Relational Database Service for years. Moreover, Microsoft has worked since 2009 on its Azure SQL managed service. Microsoft's effort has endured some fits and starts over the years. Customers that wanted to move very large SQL Server databases to the cloud had to run them on Azure's VM-based service or break them apart into multiple pieces, given Azure SQL's size limitations.


How to deal with backup when you switch to hyperconverged infrastructure

Each HCI vendor offers a hardware configuration using components supported by the virtualization vendors it wishes to support. Since the system comes pre-built, you can be assured that all the hardware components will work together and will work with any supported hypervisors. Any incompatibilities between the various components will be handled by the HCI vendor. Some HCI vendors also offer their own hypervisors. The best example of this would be Nutanix with their Acropolis hypervisor. Typically such a hypervisor will offer tighter integration with the HCI hardware and integrated data-protection features. Often, the built-in hypervisor is also less expensive than traditional hypervisors, especially if you take advantage of the native data-protection features. The final type of HCI vendor supports neither VMware nor Hyper-V, nor does it use its own hypervisor. Scale Computing uses the KVM hypervisor, which is open source. Like Nutanix, they do this to reduce their customers’ TCO while offering much of the same functionality that VMware offers. In addition, they also offer integrated data protection.


How AIOps Supports a DevOps World


AIOps can also automate workflows for alerts that require escalation, human attention and/or investigation. For example, alerts on devices supporting business-critical IT services require notification of Level 1 support staff within five minutes of alert receipt. If the alert is from a server and for a specific application, an IT or DevOps user will need to create an incident and route it to the relevant application team. AIOps takes care of this immediately with alert escalation workflows that help program first-response actions for notification and incident creation. Again, this can occur completely unsupervised – no human interaction required – once these policies are established. What’s more, policy-driven AIOps correlates dependencies based on downstream resources or establishes an algorithm-based correlation to address groups of alerts continuously. This drastically frees up time that is typically spent sifting through alert floods, figuring out what to do with them, and then doing it. Advanced AIOps tools use native instrumentation to determine how frequently specific alert sequences occur.
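A minimal, hypothetical sketch of what policy-driven escalation can look like in Python; the policy fields, deadlines, and action names are assumptions for illustration, not any particular AIOps product's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    device: str
    severity: str
    business_critical: bool
    application: Optional[str] = None

# Escalation policies are declared as data, so they run unsupervised once established.
POLICIES = [
    {
        "match": lambda a: a.business_critical,
        "action": "notify_l1",            # notify Level 1 support
        "deadline_minutes": 5,            # within 5 minutes of alert receipt
    },
    {
        "match": lambda a: a.application is not None,
        "action": "create_incident",      # route an incident to the owning app team
        "deadline_minutes": 15,
    },
]

def escalate(alert: Alert) -> list:
    """Return the actions triggered for an alert, in policy order."""
    return [
        f"{p['action']} (within {p['deadline_minutes']} min)"
        for p in POLICIES
        if p["match"](alert)
    ]

print(escalate(Alert(device="db-prod-01", severity="critical",
                     business_critical=True, application="payments")))
```

Because the first-response logic lives in declarative policies rather than in someone's head, the same alert always produces the same notifications and incident routing, with no human in the loop.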


Doing continuous testing? Here's why you should use containers


As nearly every software tester has experienced, test environments are a mixed blessing. On one hand, they allow end-to-end tests that would otherwise have to be executed in production. Without a test environment, testing teams would be shipping code that hasn't been tested across functional boundaries out to users—and hoping for the best. A well-configured and maintained test environment, one that closely mimics production and contains up-to-date code deployments, can provide a safe and sane way for testers to validate a scenario before it gets into the hands of a customer. Problematically, however, test environments encourage a mode of development that is fast becoming outdated: long integration cycles, an untrustworthy main source trunk, and late-stage testing. The most productive, highest-performing engineering teams do just the opposite. They need to be able to trust that code in the main trunk could go to production at any time. They often shift left on quality, with the majority of testing happening before a code change even lands.


Kotlin Multiplatform for iOS Developers


KMP works by using Kotlin to program business logic that is common to your app's various platforms. Then, each platform's natively programmed UI calls into that common logic. UI logic must still be programmed natively in many cases because it is too platform-specific to share. In iOS this means importing a .framework file - originally written in KMP - into your Xcode project, just like any other external library. You still need Swift to use KMP on iOS, so KMP is not the end of Swift. KMP can also be introduced iteratively, so you can implement it with no disruption to your current project. It doesn't need to replace existing Swift code. Next time you implement a feature across your app's various platforms, use KMP to write the business logic, deploy it to each platform, and program the UIs natively. For iOS, that means business logic in Kotlin and UI logic in Swift. The close similarities between Swift's and Kotlin's syntax greatly reduce a massive part of the learning curve involved with writing that KMP business logic.



Quote for the day:


"To double your net worth, double your self-worth. Because you will never exceed the height of your self-image." -- Robin Sharma


Daily Tech Digest - April 15, 2019

The Staying Power of Legacy Systems

“As strange as it might seem, we migrated our environment away from these servers, and opted instead to run our Linux systems on an IBM mainframe, even though we didn’t use the IBM native z/OS operating system itself,” said the CEO. “The mainframe-resident systems were able to deliver the five nines uptime we were promising our customers, and when we had problems, the vendor’s support was swift and responsive. ... If you are a new company without an investment in legacy systems, you can look at any solution in the IT marketplace, whether it is legacy or not. But for most companies, the decisions on hardware and software will come down to a “best in class” choice that considers the platforms companies are already running on, and where their companies need to be with their IT in the next 10 to 20 years. In this environment, new vendors with innovative solutions will continue to attract market share, but at the same time best of class legacy systems will continue to be attractive, because they have done anything but stand still. Most legacy systems now come in cloud as well as in-house implementations. Most legacy systems also have provisions for integration with or add-ons for Web-facing and social media apps.



How to avoid software outsourcing problems

The importance of choosing the correct outsourcing partner simply cannot be overstated. Working with an experienced and well-regarded software outsourcing company has helped many companies expand beyond their initial startup stage, rapidly adjust to market pressures, and bring custom software to the market while maintaining their agility as a growing organization. The best outsourcing partners will provide assistance through every aspect of the software development cycle, helping their clients conceptualize, execute, and bring their software to market. However, working with poor software outsourcing companies can be counterproductive. It can lead to massive cost overruns, harm company morale, and lead to numerous missed deadlines as they struggle to fix their own mistakes. In addition, all of this frustration may be for naught if the final software reflects their haphazard approach and lack of attention to detail. This article will help companies avoid these pitfalls by identifying the 8 most common outsourcing problems, as well as their solutions.


How DataOps helps organisations make better decisions


Making it easier for people to work with data is a key requirement in DataOps. Nigel Kersten, vice president of ecosystem engineering at Puppet, says: “The DataOps movement focuses on the people in addition to processes and tools, as this is more critical than ever in a world of automated data collection and analysis at a massive scale.” DataOps practitioners (DataOps engineers or DOEs) generally focus on building data governance frameworks. A good data governance framework – one that is fed and watered regularly with accurate, de-duplicated data drawn from the entire IT stack – helps data models evolve more rapidly. Engineers can then run reproducible tests using consistent test environments that ingest customer data in a way that complies with data and privacy regulations. The end result is a continuous and virtuous develop-test-deploy cycle for data models, says Justin Reock, chief architect at Rogue Wave, a Perforce Company. “At the core of all modern business, code is needed to transport, analyse and arrange domain data,” he says.


Artificial Intelligence: A Cybersecurity Solution or the Greatest Risk of All?

AI can also become a real headache for cybersecurity professionals around the globe. Just as security firms can use the tech to spot attacks, so can hackers in order to launch more sophisticated attack campaigns. Spear phishing is just one example out of many, as using machine learning tech can allow cybercriminals to craft more convincing messages intended to dupe the victim into giving the attacker access to sensitive information or installing malicious software. AI can even help in matching the style and content of a spear phishing campaign to its targets, as well as enhance the volume and reach of the attacks exponentially. Meanwhile, ransomware attacks are still a hot topic, especially after the WannaCry incident that reportedly cost the British National Health Service a whopping £92 million in damages – £20 million during the attack, between May 12 and 19, 2017, and a further £72 million to clean and upgrade its IT networks – and meant that 19,000 healthcare appointments had to be cancelled.


Build A Strong Cybersecurity Posture With These 10 Best Practices

When you plan to overhaul your cybersecurity infrastructure, it’s important to keep the weakest link in mind: the people in your organization. Yes, you should invest in the right technology that takes your network and endpoint security to the next level, but make sure your organization’s workforce is aware of the cyberthreats they face and how they must address these threats. Conduct security awareness training programs that establish a culture of cybersecurity awareness. ... When it comes to cyberattacks, it is not a matter of if they will happen, but when they will happen. Prevention is definitely better than cure, but if your organization does experience an attack, it is important to understand how it happened, how it unfolded and the vulnerabilities it was able to exploit. Root cause analysis can help you find the cause and plug key vulnerabilities. ... What if an attacker manages to fly under the radar and your resource-constrained IT team fails to identify a data breach in progress? Such disastrous consequences can be avoided if the threat gets identified proactively.


How to be an edgy CIO

Edge computing is the delivery of computing infrastructure that sits as close as possible to the sources of data (the logical extremes of a network) and is designed to improve the performance, operating cost and reliability of applications and services. Edge computing reduces network hops, latency, and bandwidth constraints by distributing new resources and software stacks along the path between centralized data centers and the increasingly large number of devices in the field. By shortening the distance between devices and the cloud resources that serve them, edge computing ultimately turns massive amounts of machine-based data into actionable intelligence. This happens in particular, but not exclusively, in close proximity to the last-mile network, on both the infrastructure and device sides. The word “edge” refers specifically to geographic distribution. While edge computing is a form of cloud computing, it works differently by pushing data processing to the literal “edge” devices for computing, not relying on the centralized data center to do all the work. This complementary computing system frees up bandwidth pressure since data no longer has to be constantly pushed back and forth to the data center.


Increasing trust in Google Cloud: visibility, control and automation


Your first line of defense for cloud deployments is your virtual private cloud (VPC). VPC Service Controls, now generally available, go beyond your VPC and let you define a security perimeter around specific GCP resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to help mitigate data exfiltration risks. As you move workloads to the cloud, you need visibility into the security state of your GCP resources. You also need to be able to identify threats and vulnerabilities so you can respond quickly. Last year, we introduced Cloud Security Command Center (Cloud SCC), a comprehensive security management and data risk platform for GCP. Cloud SCC is now generally available, offering a single pane of glass to help prevent, detect, and respond to threats across a broad swath of GCP services. As part of GA, we’re excited to announce the first set of prevention, detection, and response services that can help you uncover risky misconfigurations and malicious activity:


The Single Cybersecurity Question Every CISO Should Ask

Today, every organization – regardless of industry, size, or level of sophistication – faces one common challenge: security. Breaches grab headlines, and their effects extend well beyond the initial disclosure and clean-up. A breach can do lasting reputational harm to a business, and with the enactment of regulations such as GDPR, can have significant financial consequences. But as many organizations have learned, there is no silver bullet – no firewall that will stop threats. They are pervasive, they can just as easily come from the inside as they can from outside, and unlike your security team, who must cover every nook and cranny of the attack surface, a malicious actor only has to find one vulnerability to exploit. ... In a world in which security and IT operations are often at odds, this may seem counterintuitive, but the truth is what SecOps calls "the attack surface" is what IT ops calls "the environment." And no one knows the enterprise environment – from the data center to the cloud to the branch and device edge – better than the team tasked with building and managing it.


Capitalising on the power of modern data sharing
While there are undoubted benefits to data sharing, for too long, organisations have relied on legacy technologies, such as outdated big data platforms or on-premises data warehouses, to manage their data, which have been ill-equipped to meet modern data requirements. With the number of data access points available, legacy tech has been unable to handle large datasets, especially as the velocity, variety and volume of data continue to grow. Simple querying of data would take days or even weeks on traditional on-premises technology, posing a real issue in getting immediate answers. This has meant that while internal data is easier to access, external data has been far more difficult. Thankfully, the birth of cloud-built data warehouses is helping alleviate many of these struggles and helping organisations capitalise on the data sharing economy. This fits hand-in-hand with the natural progression of organisations’ growing adoption of cloud infrastructures, with 85% of organisations expected to adopt cloud technologies by 2020 — according to a survey from McAfee.


Build a Monolith before Going for Microservices: Jan de Vries at MicroXchg Berlin

Designing a system using one silo or service for each business function is what De Vries prefers, which means that each function becomes a command or request handler handling everything needed for the function. Often there is a need for services to share some data, but instead of using synchronous calls between services, he recommends sending messages using some type of message bus. Then each service can read the messages it needs irrespective of which service is sending them. One benefit from isolating different parts like this is that they can use different types of technological stacks and data storages depending on the need. De Vries points out though that just because you can, it doesn’t mean you must. He is a proponent of keeping it simple and prefers using a single technological stack, unless there is a good reason to step out to something else. If you aren’t sharing any business logic you will probably end up with a lot of duplicated code. We have been taught that duplicated code is bad (DRY); instead, we should abstract the duplication away in some way.
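A toy in-memory sketch of the "read the messages you need, regardless of who sent them" idea; a real system would use a durable message bus (Kafka, RabbitMQ, a cloud service bus), and the event and service names here are invented for the example.

```python
from collections import defaultdict

class MessageBus:
    """Toy in-memory bus: services subscribe to event types, not to each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = MessageBus()

# Two independent services react to the same event without knowing who sent it.
bus.subscribe("OrderPlaced", lambda e: print(f"billing: invoice order {e['order_id']}"))
bus.subscribe("OrderPlaced", lambda e: print(f"shipping: reserve stock for {e['order_id']}"))

# The ordering service just publishes; it never calls the other services directly.
bus.publish("OrderPlaced", {"order_id": 42})
```

The key property is the absence of synchronous coupling: new consumers can be added to the bus without changing the publisher, which is exactly what makes splitting a monolith later less painful.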



Quote for the day:


"To know what people really think, pay regard to what they do, rather than what they say." -- René Descartes


Daily Tech Digest - April 14, 2019

Ten big global challenges technology could solve


Renewable energy sources like wind and solar are becoming cheap and more widely deployed, but they don’t generate electricity when the sun’s not shining or wind isn’t blowing. That limits how much power these sources can supply, and how quickly we can move away from steady sources like coal and natural gas. The cost of building enough batteries to back up entire grids for the days when renewable generation flags would be astronomical. Various scientists and startups are working to develop cheaper forms of grid-scale storage that can last for longer periods, including flow batteries or tanks of molten salt. ... Pandemic flu is rare but deadly. At least 50 million people died in the 1918 pandemic of H1N1 flu. More recently, about a million people died in the 1957-’58 and 1968 pandemics, while something like half a million died in a 2009 recurrence of H1N1. The recent death tolls are lower in part because the viruses were milder strains. We might not be so lucky next time—a particularly potent strain of the virus could replicate too quickly for any tailor-made vaccine to effectively fight it.



Being an effective cybersecurity leader amid increasing pressure, expectations and threats

Being an effective cybersecurity leader means helping your staff avoid the burnout, guilt, and depression that comes from not getting the headcount needed, the funding for the new project, or worse yet, experiencing a data breach when the inevitable comes to pass. To lead effectively, you as a leader need to employ the principle of ensuring informed decisions happen and residual risk is accounted for and governed. The business doesn’t have to invest in every security solution available (in fact, doing so may impede their ability to effectively operate), so long as you have appropriately informed stakeholders of the bad outcomes that could come to pass from not choosing the more secure option, and having them accept the risk associated with such bad outcomes. Risk acceptance is the cybersecurity leader’s “get out of jail free” card – not in an “I told you so” way, but in a cooperative manner that helps the business view you as a partner, not an impediment, and the cybersecurity staff feel as though their concerns have been addressed.


Taking Sustainability a step further – Marginal Gains


Virtualisation and containerisation are the first step, but they talked in terms of using the whole chain of IT as a process with software-defined architecture. You should be paying only for what you use, what you need. Interestingly, with their Greenlake product, that extends the OpEx pay-as-you-go consumption-based approach to on-premise hardware. That, in turn, extends HPE’s hybrid-cloud credentials and means better cashflow for their customers, and the ability to manage the peaks more easily. Capacity on demand in your data centre, as well as the public cloud. This approach to infrastructure goes hand in hand with the shift in focus of data and processing moving to the edge, where we need solutions that provide compute power at or near the source of where the data is generated by a mobile device, a machine on the shop floor or a sensor. This is vital for supporting IoT, for the requirements of autonomous vehicles in the field, or the needs of the smart city. Gartner predicts that 75% of data will be computed at the edge rather than in the data centre by 2025, and maybe it’s coming even sooner than that!


Q&A on the Book What’s Your Digital Business Model

The first type of disruption happens when a new entrant—often a start-up like Airbnb—comes into an existing market and offers an exciting new value proposition. In banking, for instance, fin-tech start-ups have gone after profitable parts of banking’s business, like payments and loans. The second form of disruption comes from a traditional competitor within your industry, but that organization changes its business model to become a much more formidable competitor. For example, Nordstrom has evolved from a traditional department store into an attractive omni-channel business, combining the best of place and space. In our research we see industries like banking, insurance, retail and energy companies trying to find the perfect mix of place and space. The third form of disruption involved crossing industry boundaries. It’s what happens when challengers come from completely outside of your industry. For instance, Australian supermarket chain Coles has started selling home insurance, as well as offering other financial services.


6 Innovative Cities Encouraging Tech Innovation

As of March, Shanghai became the world’s first district to use both 5G and a broadband gigabit network. Shanghai’s vice mayor Wu Qing made the network’s first 5G video call using a Huawei Mate X smartphone, which is the company’s first 5G foldable phone. The city’s ambitious project, which aims to build over 10,000 5G base stations by the end of the year, is backed by state run telco China Mobile. Shanghai has been dubbed ‘China’s Silicon Valley’, and is home to the likes of Tencent, Huawei and ZTE. Another recent development is the creation of the Shanghai Technology Innovation Board, created by the government to discover and nurture promising companies in a bid to compete with US tech giants. ... Otherwise known as the Motor City, Detroit has become almost synonymous with mobility solutions. The city is following in the footsteps of other US cities by taking its existing network of automakers and facilitating work on innovative travel and transport. Ford and General Motors are just two of the auto giants based in Detroit, both of which are steering towards autonomous vehicles. In 2015, the Mayor’s Office of International Affairs was created to attract foreign investment and nurture homegrown startups.


The Connection Between Strategy And Enterprise Architecture

Business capabilities are the link connecting the strategy and business model to the enterprise architecture and the underlying technology that executes the strategy. Understanding this link enables a company to align resources, people, and processes to transform itself in response to market dynamics, thereby maintaining a competitive edge. ... A business model is a description of how an enterprise creates and captures value. It describes the customer value proposition. How the company will organize its resources and partner network to produce that value. And how it will structure its revenue streams and cost structure to fund the operations and capture value to its stakeholders. An organization can be described through the nine elements or building blocks of the business model canvas. The business model canvas helps you describe, map, discuss, design, and invent new business models.



Codementor, a startup that connects developers with questions to developers with answers, has attempted to narrow those choices down by creating a list of the worst languages to learn. The 'worst-to-best' ranking creates scores using community engagement, growth, and the job market to determine the list.  Last year the company ruled that Dart, Objective-C, CoffeeScript, Lua, and Erlang were the top five languages not worth learning. This year Codementor focused on "which languages you probably should not learn as a first programming language". For this reason, it excluded the top three most popular languages, including JavaScript, Python, and Java.  The company's data suggests the languages to not bother learning this year are Elm, CoffeeScript, Erlang, and Perl.  Somewhat surprisingly, Kotlin, a popular language for building Android apps, rose from 18th to 11th place on Codementor's worst-to-best list. Microsoft-owned code-hosting site GitHub crowned it the fastest-growing language of 2018 due to the massive growth in projects written in Kotlin.


What Does It Mean To Be A Data-Driven Enterprise Today?

It’s no secret that AI and machine learning have become the top wish-list items among CIOs and CEOs. However, the real potential of AI can be reached only if your organization’s data, which AI relies on, is accurate and business-relevant. You need to trust the source of the data being used to feed AI programs, and the data must be governed properly across the organization. This fundamental piece of the AI and machine-learning puzzle is critical for allowing AI and machine-learning technologies to “learn” how to evolve intelligence and make smarter recommendations for the business. It is also based on the premise that knowledge from the past and present must be preserved, as it ensures valuable reuse and time to market. We’ve seen many examples of CEOs struggling to understand which versions of their data are accurate due to poor data quality and governance. These companies need to establish a trusted source from which their data is managed through best-practice, automated governance, including standardizing data definitions and rules – part numbers, terminology, and so on. 


Enterprise vs. Solution vs. Infrastructure: Understanding the Different Technology Architectures

Enterprise architecture (EA) aligns your organization's IT infrastructure with your overall business goals. It shows you how your technology, information, and business flow together to achieve goals. EA allows for analysis, design, planning, and implementation at an enterprise level. It perceives industry trends and navigates disruptions using a specific set of principles known as enterprise architectural planning (EAP). ... Solution architecture (SA) describes the architecture of a technological solution. It uses different perspectives including information, technical, and business. It also considers the solution from the point of EA. Enterprise architects are best known for taking the "50,000-foot view" of a project. A solutions architect zones in on the details.  ... Infrastructure architecture refers to the sum of the company's hardware and IT capability. Achieving synergy between all the devices is its overarching goal. In the past, infrastructure architecture was the focal point for security. Today, it goes further. It's a structured approach for modeling an enterprise's hardware elements.


Riding the K-wave: Disruptive innovation in the age of sustainability

Disruptive innovation happens more than we realize, and a good question is why we’re routinely late to realize its effects. Schumpeter got some of his insights from the work of a Russian economist, Nicolai Kondratiev, who was an advisor to Vladimir Lenin. Kondratiev’s job was explaining capitalism to the Bolsheviks. During Lenin’s life, Kondratiev had a respected position in academic economics; when Stalin came to power after Lenin’s death, Kondratiev’s theories didn’t fit Stalin’s world view and he was liquidated. Kondratiev observed that really big disruptive innovations begin a 50- to 60-year economic cycle that Schumpeter later called K-waves in his honor. The idea of a K-wave is simple: For the first 25 to 30 years after its introduction, a technological disruption expands the economy, creating jobs and whole industries as massive amounts of capital flow into that new industry. The reason for expansion is simple. Very often a disruption is in high demand, but it requires some form of construction to diffuse the innovation throughout society.



Quote for the day:


"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis