Daily Tech Digest - April 15, 2019

The Staying Power of Legacy Systems

“As strange as it might seem, we migrated our environment away from these servers and opted instead to run our Linux systems on an IBM mainframe, even though we didn’t use IBM’s native z/OS operating system itself,” said the CEO. “The mainframe-resident systems were able to deliver the five-nines uptime we were promising our customers, and when we had problems, the vendor’s support was swift and responsive.” ... If you are a new company without an investment in legacy systems, you can look at any solution in the IT marketplace, whether it is legacy or not. But for most companies, decisions on hardware and software will come down to a “best-in-class” choice that considers the platforms they are already running on and where they need to be with their IT in the next 10 to 20 years. In this environment, new vendors with innovative solutions will continue to attract market share, but at the same time best-in-class legacy systems will continue to be attractive, because they have done anything but stand still. Most legacy systems now come in cloud as well as in-house implementations, and most also have provisions for integration with, or add-ons for, Web-facing and social media apps.



How to avoid software outsourcing problems

The importance of choosing the correct outsourcing partner simply cannot be overstated. Working with an experienced and well-regarded software outsourcing company has helped many companies expand beyond their initial startup stage, rapidly adjust to market pressures, and bring custom software to market while maintaining their agility as a growing organization. The best outsourcing partners provide assistance through every aspect of the software development cycle, helping their clients conceptualize, execute, and bring their software to market. Working with a poor software outsourcing company, however, can be counterproductive: it can lead to massive cost overruns, harm company morale, and cause numerous missed deadlines as the vendor struggles to fix its own mistakes. In addition, all of this frustration may be for naught if the final software reflects that haphazard approach and lack of attention to detail. This article will help companies avoid these pitfalls by identifying the 8 most common outsourcing problems, as well as their solutions.


How DataOps helps organisations make better decisions


Making it easier for people to work with data is a key requirement of DataOps. Nigel Kersten, vice president of ecosystem engineering at Puppet, says: “The DataOps movement focuses on the people in addition to processes and tools, as this is more critical than ever in a world of automated data collection and analysis at a massive scale.” DataOps practitioners (DataOps engineers, or DOEs) generally focus on building data governance frameworks. A good data governance framework – one that is fed and watered regularly with accurate, de-duplicated data drawn from the entire IT stack – helps data models evolve more rapidly. Engineers can then run reproducible tests in consistent test environments that ingest customer data in a way that complies with data and privacy regulations. The end result is a continuous and virtuous develop-test-deploy cycle for data models, says Justin Reock, chief architect at Rogue Wave, a Perforce Company. “At the core of all modern business, code is needed to transport, analyse and arrange domain data,” he says.
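To make the "reproducible test against de-duplicated data" idea concrete, here is a minimal Python sketch, not taken from the article, of the kind of repeatable data check a DataOps team might run in a consistent test environment. All names (load_fixture_records, deduplicate) are hypothetical placeholders.

```python
# Minimal sketch of a reproducible DataOps-style test: de-duplicate ingested
# records and assert the result, so the same fixture always yields the same
# clean dataset. All names here are illustrative, not from the article.
import hashlib
import json


def deduplicate(records):
    """Drop exact-duplicate records, keyed by a stable content hash."""
    seen, unique = set(), []
    for record in records:
        key = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique


def load_fixture_records():
    """Stand-in for ingesting customer data from a fixed, anonymised test fixture."""
    return [
        {"customer_id": 1, "email": "a@example.com"},
        {"customer_id": 1, "email": "a@example.com"},  # duplicate row from a second source
        {"customer_id": 2, "email": "b@example.com"},
    ]


def test_ingest_is_deduplicated():
    """Reproducible check: the same fixture always produces the same clean output."""
    clean = deduplicate(load_fixture_records())
    assert len(clean) == 2
    assert {r["customer_id"] for r in clean} == {1, 2}


if __name__ == "__main__":
    test_ingest_is_deduplicated()
    print("ok")
```

Because the fixture and the check are versioned together, the test behaves identically on every run, which is what makes the develop-test-deploy cycle for data models repeatable.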


Artificial Intelligence: A Cybersecurity Solution or the Greatest Risk of All?

AI can also become a real headache for cybersecurity professionals around the globe. Just as security firms can use the technology to spot attacks, so can hackers use it to launch more sophisticated attack campaigns. Spear phishing is just one example out of many: machine learning can allow cybercriminals to craft more convincing messages intended to dupe the victim into giving the attacker access to sensitive information or installing malicious software. AI can even help match the style and content of a spear-phishing campaign to its targets, as well as exponentially increase the volume and reach of the attacks. Meanwhile, ransomware attacks are still a hot topic, especially after the WannaCry incident that reportedly cost the UK’s National Health Service (NHS) a whopping £92 million in damages – £20 million during the attack, between May 12 and 19, 2017, and a further £72 million to clean and upgrade its IT networks – and meant that 19,000 healthcare appointments had to be cancelled.


Build A Strong Cybersecurity Posture With These 10 Best Practices

When you plan to overhaul your cybersecurity infrastructure, it’s important to keep the weakest link in mind: the people in your organization. Yes, you should invest in the right technology to take your network and endpoint security to the next level, but also make sure your organization’s workforce is aware of the cyberthreats they face and how to address them. Conduct security awareness training programs that establish a culture of cybersecurity awareness. ... When it comes to cyberattacks, it is not a matter of if they will happen, but when. Prevention is definitely better than cure, but if your organization does experience an attack, it is important to understand how it happened, how it unfolded and which vulnerabilities it exploited. Root cause analysis can help you find the cause and plug key vulnerabilities. ... What if an attacker manages to fly under the radar and your resource-constrained IT team fails to identify a data breach in progress? That disastrous outcome can be avoided if the threat is identified proactively.
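As a rough illustration of proactive identification, here is a minimal Python sketch, not from the article, that flags source IPs generating a burst of failed logins within a short window. The log format and threshold are hypothetical placeholders, not a definitive detection rule.

```python
# Minimal sketch of proactive detection: flag source IPs with an unusual burst
# of failed logins inside a short time window. Format and threshold are
# illustrative assumptions, not from the article.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 10
WINDOW = timedelta(minutes=5)


def flag_bruteforce(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples."""
    failures = defaultdict(list)
    alerts = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "FAILED":
            continue
        # Keep only this IP's failures that fall inside the sliding window.
        recent = [t for t in failures[ip] if ts - t <= WINDOW]
        recent.append(ts)
        failures[ip] = recent
        if len(recent) >= FAILED_LOGIN_THRESHOLD:
            alerts.add(ip)
    return alerts


if __name__ == "__main__":
    now = datetime(2019, 4, 15, 12, 0, 0)
    sample = [(now + timedelta(seconds=i * 10), "203.0.113.7", "FAILED") for i in range(12)]
    print(flag_bruteforce(sample))  # -> {'203.0.113.7'}
```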


How to be an edgy CIO

Edge computing is the delivery of computing infrastructure as close as possible to the sources of data (the logical extremes of a network), designed to improve the performance, operating cost and reliability of applications and services. Edge computing reduces network hops, latency and bandwidth constraints by distributing new resources and software stacks along the path between centralized data centers and the increasingly large number of devices in the field, in particular, but not exclusively, in close proximity to the last-mile network, on both the infrastructure and device sides. By shortening the distance between devices and the cloud resources that serve them, edge computing ultimately turns massive amounts of machine-based data into actionable intelligence. The word “edge” refers specifically to geographic distribution. While edge computing is a form of cloud computing, it works differently by pushing data processing out to devices at the literal “edge” of the network rather than relying on a centralized data center to do all the work. This complementary computing model frees up bandwidth, since data no longer has to be constantly pushed back and forth to the data center.
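A minimal Python sketch of that bandwidth-saving pattern, not from the article: an edge node processes raw sensor readings locally and forwards only a compact summary to the central data center. The function names are hypothetical stand-ins.

```python
# Minimal edge-computing sketch: aggregate raw readings locally at the edge
# and send only a small summary upstream, instead of shipping every reading
# back to the central data center. Names are illustrative assumptions.
import statistics


def summarise(readings):
    """Reduce a batch of raw readings to a few aggregate values."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }


def send_to_datacenter(summary):
    """Stand-in for the (much smaller) upstream transfer to the central cloud."""
    print("uploading summary:", summary)


def edge_loop(sensor_batches):
    for batch in sensor_batches:
        # The heavy lifting happens at the edge; only the summary crosses the network.
        send_to_datacenter(summarise(batch))


if __name__ == "__main__":
    edge_loop([[20.1, 20.4, 19.8, 35.2], [20.0, 20.2, 20.3, 20.1]])
```

The design choice is simply that computation moves to where the data is produced, so only decision-ready results travel over the constrained link to the data center.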


Increasing trust in Google Cloud: visibility, control and automation


Your first line of defense for cloud deployments is your virtual private cloud (VPC). VPC Service Controls, now generally available, go beyond your VPC and let you define a security perimeter around specific GCP resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to help mitigate data exfiltration risks. As you move workloads to the cloud, you need visibility into the security state of your GCP resources. You also need to be able to identify threats and vulnerabilities so you can respond quickly. Last year, we introduced Cloud Security Command Center (Cloud SCC), a comprehensive security management and data risk platform for GCP. Cloud SCC is now generally available, offering a single pane of glass to help prevent, detect, and respond to threats across a broad swath of GCP services. As part of GA, we’re excited to announce the first set of prevention, detection, and response services that can help you uncover risky misconfigurations and malicious activity:
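The announced service list is truncated in this excerpt. Separately, as a rough illustration of consuming Cloud SCC findings programmatically, here is a minimal Python sketch assuming the google-cloud-securitycenter client library. The organization ID is a placeholder and exact method signatures can differ between library versions, so treat this as a sketch of the idea rather than a definitive integration.

```python
# Minimal sketch (assumed setup, not from the post): list active Cloud SCC
# findings across all sources in an organization using the Python client.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# "sources/-" asks for findings across all sources; the org ID is a placeholder.
all_sources = "organizations/123456789012/sources/-"

# Only look at findings that are still active.
for result in client.list_findings(
    request={"parent": all_sources, "filter": 'state="ACTIVE"'}
):
    finding = result.finding
    print(finding.category, finding.resource_name)
```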


The Single Cybersecurity Question Every CISO Should Ask

Today, every organization – regardless of industry, size, or level of sophistication – faces one common challenge: security. Breaches grab headlines, and their effects extend well beyond the initial disclosure and clean-up. A breach can do lasting reputational harm to a business and, with the enactment of regulations such as GDPR, can have significant financial consequences. But as many organizations have learned, there is no silver bullet – no firewall that will stop every threat. Threats are pervasive, they can come from the inside just as easily as from the outside, and unlike your security team, which must cover every nook and cranny of the attack surface, a malicious actor only has to find one vulnerability to exploit. ... In a world in which security and IT operations are often at odds, this may seem counterintuitive, but the truth is that what SecOps calls "the attack surface" is what IT ops calls "the environment." And no one knows the enterprise environment – from the data center to the cloud to the branch and device edge – better than the team tasked with building and managing it.


Capitalising on the power of modern data sharing
While there are undoubted benefits to data sharing, for too long organisations have relied on legacy technologies, such as outdated big data platforms or on-premises data warehouses, to manage their data, and these have been ill-equipped to meet modern data requirements. With the number of data access points now available, legacy tech has been unable to handle large datasets, especially as the velocity, variety and volume of data continues to grow. Simple queries could take days or even weeks to run on traditional on-premises technology, posing a real problem for getting immediate answers. This has meant that while internal data is relatively easy to access, external data has been far more difficult to reach. Thankfully, the rise of cloud-built data warehouses is helping alleviate many of these struggles and helping organisations capitalise on the data sharing economy. This goes hand in hand with organisations' growing adoption of cloud infrastructure, with 85% of organisations expected to adopt cloud technologies by 2020, according to a survey from McAfee.


Build a Monolith before Going for Microservices: Jan de Vries at MicroXchg Berlin

Designing a system with one silo or service per business function is what De Vries prefers, which means that each function becomes a command or request handler that handles everything needed for that function. Often services need to share some data, but instead of using synchronous calls between services, he recommends sending messages over some type of message bus. Each service can then read the messages it needs irrespective of which service sent them. One benefit of isolating different parts like this is that they can use different technology stacks and data stores depending on the need. De Vries points out, though, that just because you can, it doesn't mean you must. He is a proponent of keeping things simple and prefers a single technology stack unless there is a good reason to step outside it. If you aren't sharing any business logic, you will probably end up with a lot of duplicated code. We have been taught that duplicated code is bad (DRY); instead, we should abstract the duplication away in some way.
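To make the messaging pattern concrete, here is a minimal in-process Python sketch, not taken from the talk, in which handlers subscribe to the message types they need regardless of which service publishes them. All names are illustrative.

```python
# Minimal sketch of the bus-based pattern described: services publish events,
# and each handler subscribes to the message types it cares about instead of
# being called synchronously. Names are illustrative, not from the talk.
from collections import defaultdict


class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self._subscribers[message_type].append(handler)

    def publish(self, message_type, payload):
        # Deliver the message to every handler that registered for this type.
        for handler in self._subscribers[message_type]:
            handler(payload)


def billing_handler(order):
    print("billing: invoicing order", order["id"])


def shipping_handler(order):
    print("shipping: scheduling delivery for order", order["id"])


if __name__ == "__main__":
    bus = MessageBus()
    # Billing and shipping each read the messages they need; the publisher
    # never calls them directly.
    bus.subscribe("order_placed", billing_handler)
    bus.subscribe("order_placed", shipping_handler)
    bus.publish("order_placed", {"id": 42})
```

A production system would put a real broker behind the same interface, but the coupling story is identical: the publisher does not know or care who consumes its messages.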



Quote for the day:


"To know what people really think, pay regard to what they do, rather than what they say." -- René Descartes

