Daily Tech Digest - August 07, 2019

Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered the industry standard. The new APIs can provide substantial performance and functional improvements, but may not be supported by older platforms, so an application targeting a wide range of platforms still has to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms: it is entirely possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface on top of Direct3D12, but this yields zero benefit. Instead, new approaches and rendering architectures that leverage the flexibility provided by the next-generation APIs need to be developed. There exist at least four APIs (Direct3D11, Direct3D12, OpenGL/GLES and Vulkan, plus Apple's Metal for iOS and macOS platforms) that a cross-platform 3D application may need to support. Writing separate code paths for all APIs is clearly not an option for any real-world application, so the need for a cross-platform graphics abstraction layer is evident.


"The whole goal of IAM is to make it a whole lot simpler for the user, rather than having to log on and configure access on thousands of different applications," Johnson said. "And the person on the user- or employee-enablement side of the house is really thinking about, 'How can I implement all this permissioning in a way that makes the users' lives easier, not harder?'" Organizations can run into trouble with IAM, however, when the right hand doesn't know what the left is doing, Johnson added. While the CISO and cybersecurity team might operate under false assurance that person X does not have access to resource Y, for example, someone in employee enablement might have in fact granted that access -- unaware of the security implications at play. "Then, when something bad occurs, the board might say, 'How could this happen?'" Johnson said. 


Microsoft finds Russia-backed attacks that exploit IoT devices  

Devices compromised in this way acted as back doors to secured networks, allowing the attackers to freely scan those networks for further vulnerabilities, access additional systems, and gain more and more information. The attackers were also seen investigating administrative groups on compromised networks, in an attempt to gain still more access, as well as analyzing local subnet traffic for additional data. STRONTIUM, which has also been referred to as Fancy Bear, Pawn Storm, Sofacy and APT28, is thought to be behind a host of malicious cyber-activity undertaken on behalf of the Russian government, including the 2016 hack of the Democratic National Committee, attacks on the World Anti-Doping Agency, the targeting of journalists investigating the shoot-down of Malaysia Airlines Flight 17 over Ukraine, sending death threats to the wives of U.S. military personnel under a false flag and much more. According to an indictment released in July 2018 by the office of Special Counsel Robert Mueller, the architects of the STRONTIUM attacks are a group of Russian military officers, all of whom are wanted by the FBI in connection with those crimes.


Privacy Attacks on Machine Learning Models


This type of attack is called a Membership Inference Attack (MIA), and it was created by Professor Reza Shokri, who has been working on several privacy attacks over the past four years. In his paper Membership Inference Attacks against Machine Learning Models, which won a prestigious privacy award, he outlines the attack method. First, adequate training data must be collected, either from the model itself via sequential queries of possible inputs or gathered from available public or private datasets that the attacker has access to. Then, the attacker builds several shadow models, which should mimic the target model (i.e., take similar inputs and produce similar outputs). These shadow models should be tuned for high precision and recall on samples of the training data that was collected. Note: the attack aims to have different training and testing splits for each shadow model, so you must have enough data to perform this step.
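As a rough sketch of the shadow-model step described above, the following Python fragment uses scikit-learn as a stand-in for the target model family; the data, model choice and helper names are invented for illustration, not taken from Shokri's paper. Each shadow model is trained on its own split, and its outputs are labeled "member" or "non-member" -- the training material a subsequent attack model would learn from.

```python
# Illustrative sketch of the shadow-model step of a membership inference
# attack. Model family, data and names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def build_shadow_models(X, y, n_shadows=5, seed=0):
    """Train several shadow models, each on its own train/test split,
    and record (prediction_vector, in_training_set) pairs that a
    separate attack model could later learn from."""
    rng = np.random.RandomState(seed)
    attack_features, attack_labels = [], []
    for _ in range(n_shadows):
        # Each shadow model gets a different split of the collected data.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=rng.randint(1_000_000))
        shadow = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
        # Records the shadow model *did* see are labeled 1 ("member"),
        # records it did not see are labeled 0 ("non-member").
        for data, member in ((X_tr, 1), (X_te, 0)):
            attack_features.append(shadow.predict_proba(data))
            attack_labels.append(np.full(len(data), member))
    return np.vstack(attack_features), np.concatenate(attack_labels)
```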


Unsupervised learning explained

Think about how human children learn. As a parent or teacher you don’t need to show young children every breed of dog and cat there is to teach them to recognize dogs and cats. They can learn from a few examples, without a lot of explanation, and generalize on their own. Oh, they might mistakenly call a Chihuahua “Kitty” the first time they see one, but you can correct that relatively quickly. Children intuitively lump groups of things they see into classes. One goal of unsupervised learning is essentially to allow computers to develop the same ability. As Alex Graves and Kelly Clancy of DeepMind put it in their blog post, “Unsupervised learning: the curious pupil,” ... Mixture models assume that the sub-populations of the observations correspond to some probability distribution, commonly Gaussian distributions for numeric observations or categorical distributions for non-numeric data. Each sub-population may have its own distribution parameters, for example mean and variance for Gaussian distributions. Expectation maximization (EM) is one of the most popular techniques used to determine the parameters of a mixture with a given number of components.
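For readers who want to see the mixture-model idea in code, here is a minimal sketch that fits a two-component Gaussian mixture with EM via scikit-learn; the data is synthetic and purely illustrative.

```python
# Minimal sketch: fitting a two-component Gaussian mixture with EM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(42)
# Two sub-populations with different means and variances.
observations = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=(300, 1)),
    rng.normal(loc=5.0, scale=0.5, size=(200, 1)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm.fit(observations)            # EM runs under the hood
print(gmm.means_.ravel())        # estimated component means
print(gmm.covariances_.ravel())  # estimated component variances
print(gmm.weights_)              # estimated mixing proportions
```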


Mobile-Only Bank Monzo Warns 480,000 Customers to Reset PINs

Monzo reports that PINs are supposed to be encrypted and stored in the bank's internal systems with limited access, but because a bug allowed the PINs to be stored in plaintext, more employees could have accessed them. The software bug has since been fixed, the company reports. "If we've contacted you to tell you that you've been affected, you should head to a cash machine to change your PIN to a new number as a precaution," according to the company's blog. So far, Monzo's investigation hasn't turned up any cases of fraud stemming from the unsecured PINs, and no one from outside the bank apparently accessed the data, according to the bank's statement. A spokesperson for the company did not immediately reply to Information Security Media Group's request for comment. The Guardian reports, however, that this security vulnerability has persisted for at least the last six months, and that the incident has been referred to the U.K. Information Commissioner's Office, which is Britain's watchdog agency for consumer privacy issues.


CIOs In Banking And Financial Firms Still Grappling With Cybersecurity

When it comes to cybersecurity awareness and practices, CIOs in the banking and financial services industry are on a much higher maturity curve than their peers. Despite their awareness and concerns about online threats, a new study found that banking organizations are struggling to manage cybersecurity risks, with many CIOs acknowledging that they are still not doing enough to protect their systems, networks and data. The Synopsys report, based on a survey of CIOs and IT security practitioners from global financial services organizations conducted by the Ponemon Institute, found that more than half of these firms have experienced theft of sensitive customer data or system failure and downtime because of insecure software or technology. In addition, the study shows, banking and financial firms’ CIOs are struggling to manage cybersecurity risk in their supply chains and are failing to assess their software for security vulnerabilities before release. “While the financial services industry is relatively mature in terms of its software security posture, organizations are grappling with a rapidly evolving technology landscape and facing increasingly sophisticated adversaries,” says Drew Kilbourne.


Building Maintainable Software Systems

To keep the code clean and maintainable, one can use clean architecture principles. There’s a whole book on that by Robert Martin, and the acronym SOLID to go with it, but here I’m going to simplify it as separating what the system does from how it does it, such that the “what” does not depend on the “how.” What the system does at its core is the domain and the use cases that surround it. How the system does it relates to its infrastructure, presentation and configuration. ... A key point in determining which architecture, code organization, language, framework, etc. to use is the ability to justify your decisions. If you can’t justify a decision, then you are taking a chance that it will just work out. A better approach might be to first justify the decision to yourself, so that you can later justify it to others. A good way is to record those decisions, for example by using Architecture Decision Records. Writing down your decisions helps you identify whether they really make sense, and it also benefits those coming after you, helping them understand why the system is in its current state.
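A toy sketch of that what/how separation, with all names invented for the example: the use case (the “what”) depends only on an abstract port, while the infrastructure (the “how”) supplies an implementation that can be swapped without touching the domain code.

```python
# Hypothetical illustration of separating "what" from "how".
from abc import ABC, abstractmethod

class OrderRepository(ABC):              # the "what" side declares its needs
    @abstractmethod
    def save(self, order: dict) -> None: ...

class PlaceOrder:                        # domain use case, no infrastructure imports
    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def execute(self, order: dict) -> None:
        if not order.get("items"):
            raise ValueError("an order needs at least one item")
        self._repository.save(order)

class InMemoryOrderRepository(OrderRepository):  # the "how" side, swappable
    def __init__(self):
        self.orders = []

    def save(self, order: dict) -> None:
        self.orders.append(order)

repo = InMemoryOrderRepository()
PlaceOrder(repo).execute({"items": ["book"]})
```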


Best mobile device security policy for loss or theft


The first step in developing a reasonable response procedure for a stolen or lost work phone is to acknowledge what's at stake. Today's business smartphones, and sometimes even tablets, store a huge amount of information and access, so IT must treat lost or stolen devices as serious threats. ... Another way IT can reduce the damage of a lost work phone is to ensure that users are on board with a mobile device security policy and established best practices. Users must know the exact steps to follow once a loss or theft occurs, such as how to report a lost device and how to help locate it. IT professionals may have listed or documented these steps in a manual, but they must communicate the process to users as well. Finally, IT must evaluate existing controls and processes for lost mobile devices. IT professionals can run tests for these policies on a one-off basis every year, via a survey or in a one-on-one meeting.


Facial recognition… coming to a supermarket near you

As with all algorithmic assessment, there is reasonable concern about bias. No algorithm is better than its dataset, and – simply put – there are more pictures of white people on the internet than there are of black people. “We have less data on dark-skinned people,” says Pantic. “Large databases of Caucasian people, not so large on Chinese and Indian, desperately bad on people of African descent.” Davis says there is an additional problem: darker skin reflects less light, providing less information for the algorithms to work with. For these two reasons algorithms are more likely to correctly identify white people than black people. “That’s problematic for stop and search,” says Davis. Silkie Carlo, the director of the not-for-profit civil liberties organisation Big Brother Watch, describes one situation where a 14-year-old black schoolboy was “swooped by four officers, put up against a wall, fingerprinted, phone taken, before police realised the face recognition had got the wrong guy”. That said, the Facewatch facial-recognition system is, at least on white men under the highly controlled conditions of their office, unnervingly good. Nick Fisher, Facewatch’s CEO, showed me a demo version; he walked through a door and a wall-mounted camera in front of him took a photo of his face.



Quote for the day:


"Leaders make decisions that create the future they desire." -- Mike Murdock


Daily Tech Digest - August 06, 2019

Evolution of the internet: Celebrating 50 years since Arpanet

Daily traffic on the Internet surpassed 3 million packets in 1974. First measured in terabytes and petabytes, monthly traffic volume is now measured in exabytes, which is 10^18 bytes. In 2017, the annual run rate for global IP traffic was 122 exabytes per month, or 1.5 zettabytes per year, according to Cisco’s Visual Networking Index. Annual global IP traffic will reach 396 exabytes per month, or 4.8 zettabytes per year, by 2022, Cisco predicts. As traffic volume has grown, so too has the number of devices connected to the internet. Today, the number of devices connected to IP networks is approaching 20 billion. By 2022, there will be 28.5 billion networked devices, up from 18 billion in 2017, Cisco predicts. That’s more than the number of people in the world. Overall, Cisco predicts there will be 3.6 networked devices per person by 2022, up from 2.4 in 2017. Today, smartphone traffic continues to grow and is poised to exceed PC traffic in the coming years. In 2018, PCs accounted for 41% of total IP traffic, but by 2022 PCs will account for only 19 percent of IP traffic, according to Cisco’s data. Smartphones will account for 44 percent of total IP traffic by 2022, up from 18% in 2017.
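As a quick sanity check of the unit conversion behind those figures (1 zettabyte = 1,000 exabytes), assuming the quoted monthly rates hold across a full year:

```python
# Sanity-checking the quoted Cisco figures: monthly exabytes to annual
# zettabytes (1 ZB = 1,000 EB).
EB_PER_ZB = 1_000

for year, eb_per_month in (("2017", 122), ("2022 (forecast)", 396)):
    zb_per_year = eb_per_month * 12 / EB_PER_ZB
    print(f"{year}: {eb_per_month} EB/month is about {zb_per_year:.1f} ZB/year")
# 2017: 122 EB/month is about 1.5 ZB/year
# 2022 (forecast): 396 EB/month is about 4.8 ZB/year
```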


Why Every Developer Should Know a Bit of Technical Writing

First, technical writing can help you communicate more easily with your teammates. If you’re collaborating with other software developers on a regular basis, you know the importance of exchanging ideas, ensuring you’re working for the same high-level goals, and solving problems together. Technical writing abilities help you formally structure these bits of communication so your coworkers can better understand them; with an efficiently written message, you can avoid most misconceptions and ultimately work faster. You can also use your technical writing abilities to communicate with out-groups more efficiently, especially if those groups have limited technical knowledge. Rather than using terms unique to the development field, or describing code directly, you’ll have to find high-level ways to describe the challenges you’re facing, or use metaphors so that other people can grasp what you’re saying. Either way, you’ll be more valuable in client meetings, and you’ll be able to talk to account managers and team leaders in other departments in a way that makes sense to them, while still conveying what you need to convey.


Are developers honestly happy working 60-hour weeks?


The annual Stack Overflow survey is one of the most comprehensive snapshots of how programmers work, with this year's poll being taken by almost 90,000 developers across the globe. Commenting on the data, Robert Pozen, senior lecturer for technological innovation, entrepreneurship, and strategic management at MIT Sloan School of Management, said although many "white-collar professionals" are content to work for longer than the standard 40-hour week, working hours can only be extended so far before they negatively affect them. "Many professionals are quite happy working 40 to 55 hours per week," he says. "But if professionals work for 70 to 80 hours per week on a regular basis, their productivity will gradually deteriorate on average. They will lose focus, and the long work hours will undermine the rest of their lives. "Of course, professionals can have fruitful work spurts on projects they like or think are important. But that is the exception, rather than the rule." For developers, that fall in productivity is often mapped to an increase in poor quality and buggy code that will need to be fixed at some point, actually costing companies more in the long run.


What Millennials Think Of Boomers & Vice Versa

As with many misunderstandings at work, generational or otherwise, it’s always a good idea to take a step back and look for the upsides. Downsides are easy to find. (It’s why there are so many misunderstandings!) So the next time you find yourself looking across the generational divide with misgivings, here are some upsides to keep in mind about all the generations. Millennials owe a debt of gratitude to Gen X-ers for bringing a new generational identity to the workplace, one in which self-sufficiency and resourcefulness are highly valued, along with minimal management and maximum independence. This, combined with a bit of Gen X cynicism, paved the way for the Millennial perspective. Other Millennial advantages come from the time in history in which they grew up. For example, I’ve been surprised repeatedly by the exposure to other cultures that young people in this generation have had — high school students who spend a summer studying in South Korea, college students who opt for a gap year in Hungary, or who head to Ghana to work construction.


Evolution in action: How datacentre hardware is moving to meet tomorrow’s tech challenges


A demonstration system used separate memory and compute “bricks” (plus accelerator bricks based on GPUs or FPGAs) interconnected by a switch matrix. Another example was HPE’s experimental The Machine. This was built from compute nodes containing a CPU and memory, but instead of being connected directly together, the CPU and memory were connected through a switch chip that also linked to other nodes via a memory fabric. That memory fabric was intended to be Gen-Z, a high-speed interconnect using silicon photonics being developed by a consortium including HPE. But this has yet to be used in any shipping products, and the lack of involvement by Intel casts doubts over whether it will ever feature in mainstream servers. Meanwhile, existing interconnect technology is being pushed faster. Looking at the high performance computing (HPC) world, we can see that the most powerful systems are converging on interconnects based on one of two technologies: InfiniBand or Ethernet.


Developers Are More Remote-Based, Company Connected & Burnt Out


Remote work is the new normal for developers. It's not only something they prefer, but something they increasingly demand from employers. Eighty-six percent of respondents currently work remotely in some capacity, with nearly one-third working from home full time. Forty-three percent say the ability to work remotely is a must-have when considering an offer from a company. The traditional narrative of remote workers as isolated and disengaged from their companies is proving false for many. Seventy-one percent of developers who work remotely said they feel connected to their company’s community. But the issue hasn’t disappeared entirely. The twenty-nine percent who don’t feel connected say they feel excluded from offline team conversations or don’t feel integrated into their company’s culture when working remotely. The burnout problem is real. Two-thirds of all respondents said their stress levels have caused them to feel burnt out or work fatigued, regardless of whether or not they work remotely. Developers expect remote work to improve work-life balance, but the reality doesn’t always line up with that hope.


Think beyond tick-box compliance


According to Holt, compliance and the need to recognise and leverage the business value of data are both data control challenges. In her experience, viewing them in this way makes the alignment of business and compliance objectives much less of a problem. “Organisations can begin to identify existing use cases and processes that depend on this control, and form interdisciplinary teams involving stakeholders from both compliance and other business roles to collaborate on shared outcomes and objectives. From this comes shared processes and workflows, shared technology, and – to some extent – shared budgets. By intertwining compliance goals within the broader enterprise initiative for data control and value realisation, there’s the potential for compliance to cease being a cost centre over time,” says Holt. “Benefits, such as improved customer relations and consumer trust, provide ‘softer’ returns that are often difficult to quantitatively measure over a short-term period, but can be significant and should not be neglected in calculations,” she adds.


The Phantom Menace in Unit Testing

Let me state up front that this is not a rant about unit testing; unit tests are critically important elements of a robust and healthy software implementation. Instead, it is a cautionary tale about a small class of unit tests that may deceive you by seeming to provide test coverage but failing to do so. I call this class of unit tests phantom tests because they return what are, in fact, correct results but not necessarily because the system-under-test (SUT) is doing the right thing or, indeed, doing anything. In these cases, the SUT “naturally” returns the expected value, so doing (a) the correct thing, (b) something unrelated, or even (c) nothing, would still yield a passing test. If the SUT is doing (b) or (c), then it follows that the test is adding no value. Moreover, I submit that the presence of such tests is often deleterious, making you worse off than not having them because you think you have coverage when you do not. When you then go to make a change to the SUT supposedly covered by that test, and the test still passes, you might blissfully conclude that your change did not introduce any bugs to the code, so you go on your merry way to your next task.
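A hedged, invented example makes the point concrete: the SUT's "natural" return value happens to equal the expected value, so the test passes whether the method does the right thing, something unrelated, or nothing at all.

```python
# Invented example of a "phantom test": the SUT's default return value
# happens to equal the expected value, so the test adds no real coverage.
import unittest

class DiscountCalculator:
    def discount_for(self, customer: dict) -> float:
        # Bug: loyalty is never actually checked; we always return 0.0.
        return 0.0

class TestDiscountCalculator(unittest.TestCase):
    def test_new_customer_gets_no_discount(self):
        # Passes, but only "naturally": a calculator that does (a) the right
        # thing, (b) something unrelated, or (c) nothing at all would all
        # return 0.0 here.
        calc = DiscountCalculator()
        self.assertEqual(calc.discount_for({"loyalty_years": 0}), 0.0)

if __name__ == "__main__":
    unittest.main()
```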


Evaluate the COBIT framework 2019 update


ISACA updated every part of the COBIT framework for 2019. The changes and additions to COBIT 2019 are encapsulated within the COBIT document suite, which is available to ISACA members for free. The principal changes include a new publication within the core framework, several new objectives, security practices updates and updated references to other standards, guidelines and regulations. Four core publications express the COBIT framework. The introduction and methodology publication provides definitions, explains management objectives and lays out the COBIT framework's structure. The governance and management objectives publication details the COBIT model and all constituent governance and management objectives, each associated with a specific process. A design publication, which is new in COBIT 2019, offers practical and prescriptive guidance that enables adopters to put COBIT into practice within the specific needs of their organizations.


Lessons Learned From A Year Of Testing Web Platform

Certain kinds of failures had side-effects that we didn’t anticipate. Even though our fancy automatic recovery mechanisms kicked in, the workers were doomed to fail all subsequent attempts. That’s because the unexpected side-effects persisted across independent work orders. The most common explanation will be familiar to desktop computer users: the machines ran out of disk space. From overflowing logs and temporary web browser profiles, to outdated operating system files and discarded test results, the machines had a way of accumulating useless cruft. It wasn’t just storage, though. Sometimes, the file system persisted faulty state. This entire class of problem can be addressed by avoiding state. This is a core tenet in many of today’s popular web application deployment strategies. The “immutable infrastructure” pattern achieves this by operating in terms of machine images and recovering from failure by replacing broken deployments with brand new ones. The “serverless” pattern does away with the concept of persistence altogether, which can make sense if the task is small enough.



Quote for the day:


"If you want extraordinary results, you must put in extraordinary efforts." -- Cory Booker


Daily Tech Digest - August 05, 2019

Is your enterprise software committing security malpractice?

Enterprise software may also be enterprise spyware
An analytics firm called ExtraHop examined the networks of its customers and found that their security and analytic software was quietly uploading information to servers outside of the customer's network. The company issued a report and warning last week. ExtraHop deliberately chose not to name names in its four examples of enterprise security tools that were sending out data without warning the customer or user. A spokesperson for the company told me via email, “ExtraHop wants the focus of the report to be the trend, which we have observed on multiple occasions and find alarming. Focusing on a specific group would detract from the broader point that this important issue requires more attention from enterprises.” ... In every case, ExtraHop provided evidence that the software was transmitting data offsite. In one case, a company noticed that approximately every 30 minutes, a network-connected device was sending UDP traffic out to a known bad IP address. The device in question was a Chinese-made security camera that was phoning home to a known ​malicious IP address​ with ties to China.
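For illustration only, here is a small Python sketch of the kind of check ExtraHop describes: flag outbound flows to a blocklisted address and see whether they recur on a roughly fixed interval. The flow records, interval tolerance and blocklist entry are all hypothetical (the address is from the TEST-NET documentation range).

```python
# Hypothetical flow-log check for a device "phoning home" on a schedule.
from datetime import datetime, timedelta

BLOCKLIST = {"203.0.113.77"}   # placeholder "known bad" address

flows = [  # (timestamp, destination_ip) pairs from a flow log
    (datetime(2019, 8, 5, 10, 0), "203.0.113.77"),
    (datetime(2019, 8, 5, 10, 31), "203.0.113.77"),
    (datetime(2019, 8, 5, 11, 2), "203.0.113.77"),
]

suspicious = sorted(t for t, ip in flows if ip in BLOCKLIST)
gaps = [b - a for a, b in zip(suspicious, suspicious[1:])]
if gaps and all(abs(g - timedelta(minutes=30)) < timedelta(minutes=5) for g in gaps):
    print("device appears to beacon to a blocklisted address every ~30 minutes")
```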


Continuous Testing is commonly lumped together with “shift left.” However, to deliver the right feedback to the right stakeholder at the right time, Continuous Testing needs to occur throughout the software delivery lifecycle – and even beyond that to production (e.g., monitoring information from production and feeding that back from the quality perspective). Just as the name indicates, Continuous Testing involves testing continuously. Simply starting and finishing testing earlier is not, by definition, Continuous Testing. How do you reach this level of continuous quality and Continuous Testing? The path forward is different for every team. Some might focus on automating traditionally manual processes while others might wrestle with orchestrating and correlating all the various test automation tools they’ve come to master. The challenge is getting to the point where you can report on whether an overarching application or project involving all these different teams – with different cadences, architectures, tool stacks, structures, and challenges – has an acceptable level of risk.



Most UK university applicants at risk of email fraud


Setting DMARC policies to “reject” is the only guaranteed way of preventing email spoofing, which has long been blamed for fraud victims being duped by social engineering techniques. Opting to set the policy to “none” will merely alert the domain owner of potentially suspicious activity, but will not warn the recipient of fraudulent emails. Setting the policy to “quarantine” also notifies the domain owner and potentially offers some protection by sending the email to “spam” or “junk” folders, but the result depends on the delivery policy of the email provider and therefore does not provide guaranteed protection. This means that in the run-up to the announcement of A-level results on 15 August 2019, and immediately thereafter, the majority of those communicating with universities about course placements could be targeted by fraudsters with emails that appear to come from universities.
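For context, a domain's DMARC policy is published as a DNS TXT record at _dmarc.<domain>, with the p= tag carrying one of the three values discussed above ("none", "quarantine" or "reject"). A minimal lookup sketch, assuming the third-party dnspython package and a placeholder domain, might look like this:

```python
# Sketch: reading a domain's DMARC policy from DNS. Assumes dnspython
# (pip install dnspython); the domain below is a placeholder.
import dns.resolver

def dmarc_policy(domain: str) -> str:
    """Return the p= value of the domain's DMARC record, or 'no record'."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return "no record"
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            tags = dict(t.strip().split("=", 1)
                        for t in record.split(";") if "=" in t)
            return tags.get("p", "none")
    return "no record"

print(dmarc_policy("example.ac.uk"))  # placeholder university domain
```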


GAO Blasts Cybersecurity Efforts of Federal Agencies

The GAO undertook the study to not only determine to what extent the agencies had instituted key elements of a risk management program, but also to find out what challenges these agencies were facing in putting those elements in place. The study also reviewed steps the Office of Management and Budget and the Department of Homeland Security have taken to address their risk management responsibilities. Investigators found that while all but one agency - the General Services Administration - had installed a cybersecurity risk executive, 16 agencies had not fully established a cybersecurity risk management strategy that outlined boundaries for risk-based decisions. "The risks to IT systems supporting the federal government and the nation's critical infrastructure are increasing as security threats continue to evolve and become more sophisticated," according to the GAO report. "These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, steady advances in the sophistication of attack technology, and the emergence of new and more destructive attacks. ..."


The Balance Of Power: Self-Service And IT


To prepare data you need to hold an analytic purpose in mind, however tentatively formed. Otherwise you’re not even experimenting or exploring, you’re just playing. Equally, to analyze data is to investigate not only its aggregations and patterns, but its structure too. And the more you learn about the structure of data, the more you might tweak it, reshape it, indeed wrangle it to reveal more patterns. Whether you are comparing start and end dates of a process to analyze an elapsed time, arranging demographic data sets into appropriate age groups to find useful correlations, or simply concatenating name fields to create a more useful identifier, the distinction between analyzing and wrangling is a weak one; especially so with self-service technologies, because rather than being a cumbersome exchange of requirements between business and IT, this new, empowered analysis typically happens on the desktop of one savvy, and satisfied, business user.
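A small pandas sketch of the wrangling steps just mentioned -- computing an elapsed time from start and end dates, bucketing ages into groups, and concatenating name fields -- with invented data for illustration:

```python
# Illustrative wrangling steps on a tiny invented dataset.
import pandas as pd

df = pd.DataFrame({
    "first_name": ["Ada", "Grace"],
    "last_name": ["Lovelace", "Hopper"],
    "age": [36, 85],
    "start": pd.to_datetime(["2019-08-01 09:00", "2019-08-01 10:30"]),
    "end": pd.to_datetime(["2019-08-01 09:45", "2019-08-01 12:00"]),
})

df["elapsed_minutes"] = (df["end"] - df["start"]).dt.total_seconds() / 60
df["age_group"] = pd.cut(df["age"], bins=[0, 30, 60, 120],
                         labels=["under 30", "30-60", "over 60"])
df["full_name"] = df["first_name"] + " " + df["last_name"]
print(df[["full_name", "age_group", "elapsed_minutes"]])
```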


Lack of resources top challenge to IT security


After a lack of resources, respondents cited a lack of experience as their top challenge (37%), followed by a lack of skills (31%). Ultimately, security professionals feel their budgets are not giving them what they need, the survey report said, with only 11% saying security budgets were rising in line with, or ahead of, the cyber security threat level, while the majority (52%) said budgets were rising, but not fast enough. Asked about the source of cyber security threats, 75% said people are the biggest challenge they face in cyber security, followed by processes (12%) and technology (13%). This may explain the need for more resources even as budgets increase, the report said, noting that the people issue is a far more complex one to deal with. Yet at the same time, the report said there are signs of improvement, with more than 60% of IT professionals saying that the profession is getting better – or much better – at dealing with security incidents when they occur, and only 7% saying the profession is getting worse.


FaceApp's Real Score: A Mathematical Face Feature Set

The accusation was debunked. And FaceApp tried to provide reassurance by saying it discards most photos within 48 hours despite its permissive privacy policy. And while there's been a lot of digging, nothing has surfaced to indicate there's anything more nefarious going on. But it's not the photos themselves that are necessarily what's most valuable to Wireless Labs, the Russian company behind FaceApp. It's the mathematical data describing faces that's derived from the photos, which these days is highly sought after information. How FaceApp works is still very much a black box in a cloud computer. Wireless Labs' founder Yaroslav Goncharov told a Russian publication two years ago he became interested in neural networks - that is, training computers to work in ways that mimic the human brain - during a three-year stint at Microsoft. Facial manipulation and recognition technology is progressing rapidly, and there are shortcut ways to do what FaceApp is doing. One academic paper describes a simplified way to age people or add smiles or glasses that doesn't involve deep neural training.


Remote code execution is possible by exploiting flaws in Vxworks

The vulnerabilities affect all devices running VxWorks version 6.5 and later with the exception of VxWorks 7, issued July 19, which patches the flaws. That means the attack window may have been open for more than 13 years. Armis Labs said that affected devices included SCADA controllers, patient monitors, MRI machines, VOIP phones and even network firewalls, specifying that users in the medical and industrial fields should be particularly quick about patching the software. Thanks to remote-code-execution vulnerabilities, unpatched devices can be compromised by a maliciously crafted IP packet that doesn’t need device-specific tailoring, and every vulnerable device on a given network can be targeted more or less simultaneously. The Armis researchers said that, because the most severe of the issues targets “esoteric parts of the TCP/IP stack that are almost never used by legitimate applications,” specific rules for the open source Snort security framework can be imposed to detect exploits. VxWorks, which has been in use since the 1980s, is a popular real-time OS, used in industrial, medical and many other applications that require extremely low latency and response time.


Scrum & The Toyota Production System, Build Ultra-Powerful Teams


Scrum is a rhythmic planning method. It stands in opposition to the traditional batch-style approach, which holds that the construction of a computer system requires first completing its analysis before proceeding to development and then to testing. This very cumbersome and costly approach has left many projects deadlocked. Scrum, on the contrary, breaks this model by cutting the construction of the product into small batches called sprints. During a sprint, the team analyzes, develops and tests what the client considers most valuable. A sprint lasts between one and four weeks. At the end of the sprint, during the sprint review, an increment of the product is presented to the customer, who can thus quickly provide feedback. The team corrects and adapts the product sprint after sprint, according to customer feedback. Gradually the product takes shape. In addition to this ongoing adaptation to customer needs, Scrum provides a formal structure for improving team practices by introducing the notion of the retrospective. It's a special moment at the end of the sprint during which the team looks back on its practices to improve them in the next sprint.


The lean CDO: why it’s time to develop a minimum viable data product

“This shift to product-centric delivery models entails co-locating CDOs into business units and strives for constant improvement rather than siloed project metrics,” continued Faria. Bill Swanton, distinguished research VP at Gartner, added that this shift towards product-centric application models didn’t come about randomly. It goes hand-in-hand with the adoption of agile development methodologies and DevOps. “Business leaders are generally unhappy with the speed with which they get application improvements and how they work. Given that no IT organisation gets anywhere near enough funding to do everything everyone wants when they want it, product-centric approaches allow faster delivery of the most important capabilities needed,” he said. According to Gartner, in a product line management model, product lines are funded based on the business capabilities they support. Common or shared capabilities — such as infrastructure, technology, D&A — are funded based on the anticipated and aggregated needs of the product lines they support.



Quote for the day:


"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford


Daily Tech Digest - August 04, 2019

A strategist’s guide to upskilling


Upskilling is not the same as reskilling, a term associated with short-term efforts undertaken for specific groups (for example, retraining steelworkers in air-conditioning repair or locksmithing). Reskilling doesn’t help much if there are too few well-paying jobs available for the retrained employees. An upskilling effort, by contrast, is a comprehensive initiative to convert applicable knowledge into productive results — not just to have people meet classroom requirements, but to have them move into new jobs and excel at them. It involves identifying the skills that will be most valuable in the future, the businesses that will need them, the people who need work and could plausibly gain those skills, and the training and technology-enabled learning that could help them — and then putting all these elements together. To someone accustomed to current forms of workforce training, in which resources are constrained and companies generally operate independently of one another, an upskilling initiative might seem massive and unaffordable.



Are CIOs truly prepared for the next economic downturn?

Moving responsibilities can take time and is disruptive to the overall operation of the IT organization. Multiply that by many individuals and one can quickly see how disruptive organizational changes can be to the culture. Organizational morale drops, and added energy must be spent to stabilize the remaining organization and ensure consistent operations. To prepare for the next decline, organizations must consider a more flexible organizational model that accommodates cross-functional knowledge while breaking down silos. Focus on functions and parts of the organization that would remain should a dip occur. These functions should include those most directly tied to the intellectual property and critical functions of the company. Use contingent labor or outsourcing agreements to augment additional areas. Contingent labor allows for the ability to scale up or down as demand changes. The second aspect is to avoid long-term commitments. Look out for hard commitments that would limit flexibility to change contracts; examples include contingent labor, outsourcing, licensing and spending thresholds. Negotiate contract options with flexibility should an economic decline hit. Vendors will be reluctant to agree to this language in their contracts, so consider a compromise where commitments are made but have out-clauses should negative economic conditions prevail.


What is the CCPA and why should you care?

With every new law, regulation or standard, there are the details that one must comply with, in addition to the repercussions of those issues. That alone could fill a few articles. One of those areas to consider is whether your insurance policies will protect you for CCPA-related issues. CCPA has a major effect in that area; the areas where you need to get your insurance department involved include professional liability/E&O, directors & officers policies, cyber insurance, employment practices liability, and others. As part of your CCPA readiness assessment, ensure that all of the areas CCPA can impact are identified and brought up to compliance. Like the state, CCPA is huge. Read the details and it’s easy to see that CCPA requires firms to make major infrastructure changes. CCPA mandates a significant number of new processes around data collection, and requires significant reengineering and rearchitecting of how personal data is handled. And like the mountain of the same name in California, CCPA is mammoth. If you think you are in scope for CCPA, take a few days to read everything you can on the topic. The more educated you are about the act, the better you can deal with it.


The Military-Style Surveillance Technology Being Tested in American Cities

When it comes to law enforcement, police are likewise free to use aerial surveillance without a warrant or special permission. Under current privacy law, these operations are just as legal as policing practices whereby an officer spots unlawful activity while walking or driving through a neighborhood. Say an officer sees marijuana plants through the open window of a house. Because the officer is in a public space—a road or sidewalk—he or she doesn’t need permission to see the illicit plants, or a warrant to photograph the scene. The only caveat to police aerial-surveillance activities is that they must employ publicly accessible technology, a term that has been defined, somewhat vaguely, in a small number of court cases. In two cases from the 1980s stemming from investigations in which police used cameras aboard helicopters to spot marijuana plants, the Supreme Court ruled that the law-enforcement agencies had not violated the Fourth Amendment, because both helicopters and commercial cameras are generally publicly available.


How AI can support cybersecurity leaders
Humans consume and process information through reading, watching, and participating in discussions. In a similar manner, AI can be used to train computers in the “language of security” using techniques such as large-scale natural language processing (NLP). This greatly helps in harvesting cybersecurity information so that security analysts can work more efficiently and faster. AI and analytics enable security orchestration to automatically block threats, correct problems, respond to attacks and automate low-level alerts based upon prior examples or similar historical threats. But it doesn’t stop there – in addition to responding faster, AI can be used as a trusted advisor, capable of offering best-practice recommendations. For example, AI can take automatic action when a risky user is detected, by verifying and/or suspending the user. It can help reduce the time for the access certification process by providing guidance on risk, taking automatic action on low-risk certifications and allowing security personnel to focus on high-risk access certifications.


A dismal industry: The unsustainable burden of cybersecurity

The way to improve security is to make company boards accountable, and they will pressure the executives to take the right steps -- in a similar way that the Sarbanes-Oxley legislation made directors accountable for company financial reports. However, a lot of companies had trouble finding board members following Sarbanes-Oxley. This could happen again if board members are made accountable for cybersecurity breaches, which seems like an impossible task given the media coverage of larger, more disturbing attacks. Fear, uncertainty, and disaster is a traditional marketing tactic in the IT industry, and cybersecurity companies are happy to focus on the dire need for more spending on their wares and their services. The scare tactics have been effective, with significant rises in cybersecurity budgets of around 15% annually, says Rothrock. But this takes away money from other IT projects -- projects that could improve revenues. It's an ever-larger black hole of money and human resources that cannot be invested in productivity.


Today's AI 'Revolution' Is More Of An Evolution

The brittleness of today’s systems means companies must also devote considerable resources towards understanding the situations under which they may fail and constructing the necessary cushioning to minimize the impact of such failures on the applications themselves. This can take the form of hand-coded rulesets for the most mission-critical decisions or combining deep learning and classical models, with special handling of cases in which the two diverge beyond a certain threshold. Despite these limitations, deep learning is finding no shortage of applications in the enterprise, automating many tasks that had historically been strongly resistant to codification due to their noisy data, complex patterns or multimedia source data. Yet these applications are typically located outside of the limelight. In contrast to the splashy research demonstrations playing video games or teaching robots how to learn to walk, production deployments today tend to be far more mundane and located in less visible places, from image filtering to chat bots to routing systems. Each deployment displaces human workers that once filled those jobs or reduces the need to hire new workers, but its introduction is typically little publicized and little noticed outside those it immediately affects.


Whistleblower vindicated in Cisco cybersecurity case

The exploit Glenn, 42, discovered would have given an attacker full administrative access to the software that managed video feeds, letting them be monitored from a single location, the lawsuit says. It could also potentially allow unauthorized access to sensitive connected systems. That meant an intruder might have taken control of or bypassed physical security systems such as locks and fire alarms, which are regularly connected to camera systems. "An unauthorized user could effectively shut down an entire airport by taking control of all security cameras and turning them off," the suit says. Airports affected included Los Angeles International and Chicago's Midway, it says. "You could penetrate the entire system. And you could do that without any trace. And have complete backdoor access to the system whenever you wanted," said Michael Ronickher, an attorney representing Glenn with the firm Constantine Cannon LLP.


Trading Strategies Using Deep Reinforcement Learning

Reinforcement learning (RL) is about taking suitable action to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation. Reinforcement learning differs from supervised learning because, in supervised learning, the training data has the answer key with it, so the model is trained with the correct answer itself, whereas in reinforcement learning there is no answer; the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its experience. RL refers to goal-oriented algorithms, that is, algorithms that seek to achieve a complex objective or to maximize the reward through a sequence of steps, such as obtaining the highest score in an Atari game. The elements that make up this approach are states, a reward function, actions, and an environment in which the agent interacts. Deep Reinforcement Learning is essentially the combination of deep neural networks and reinforcement learning. In this case, we speak of a special type called Q-Learning.
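To make the "Q" in Q-Learning concrete, here is a minimal tabular sketch of the update rule; a deep RL trading agent would replace the table with a neural network, and the toy environment below (invented for illustration) with actual market data.

```python
# Minimal tabular Q-learning sketch; states, actions and rewards are toys.
import numpy as np

n_states, n_actions = 5, 2          # e.g. discretised price states, hold/buy
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    reward = 1.0 if action == 1 and state >= 3 else -0.1
    return np.random.randint(n_states), reward

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print(Q)  # learned action values per state
```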


How IoT is revolutionizing facilities data management

It is important to note, however, that data gathered by IoT can accumulate quickly, which can be a double-edged sword. The point of IoT is to be able to analyze all this accumulated data and generate meaningful insights from it. That’s what puts the “smart” in smart technologies. But at the unfathomable levels of data that IoT devices are expected to generate, this is easier said than done. This is both the challenge and the opportunity for facilities managers who are dealing more and more with IoT-enabled smart buildings and equipment within their operations. When used to collect facilities-related data – such as equipment outputs, electrical consumption, or asset function, for instance – large volumes and varieties of information are sent rapidly to central, Internet-based hubs. Without the proper infrastructure in place, it’s easy for these datasets to become siloed and rendered difficult to utilize. Therefore, rethinking both how your data is stored and how it’s analyzed is a central requirement if you plan to implement IoT as part of your facilities management analytics strategy.



Quote for the day:


"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold


Daily Tech Digest - August 02, 2019

Digital Transformation: Are you digitally distraught or digitally determined?

Online orders are rarely a roadblock in B2B sales. The challenge of self-service lies in presenting complex product information and pricing in a system that’s fast, intuitive and capable of recommending the best solutions for a given customer. Businesses often buy products from manufacturers with varied configurations, order sizes and contract terms. Seemingly similar deals can vary in significant ways, and market prices in B2B aren’t always visible. Nonetheless, buyers expect to find pricing information as easily as they might look up product specs online. ... An online B2B purchase is likely to involve pricing algorithms, product databases, chatbots, market data, automated email and detailed customer profiles built across marketing, sales and support. AI can help instantly identify the optimal next steps for customers by analysing thousands of data points gathered during this process. Customers already expect companies like Netflix and Amazon to anticipate what they need. The same will soon be true of product configurations, add-ons and services. As online transactions become the majority of manufacturers’ sales, successful implementation of AI will become critical to success.


How industry cloud technology is changing healthcare


There is a role for both large providers and smaller ones to help the healthcare sector make the digital transition. Large cloud computing providers have superior computing power, but not the industry expertise and dedicated support to work with healthcare clients, according to Gartner analysts Gregor Petri and Anurag Gupta. This creates a significant opportunity for managed service provider partners. Smaller cloud computing providers can work with Amazon and Microsoft to build and deliver services while establishing direct relationships with healthcare stakeholders. Smaller providers also can help with implementation and ongoing management of cloud-based applications. In addition, these providers can use HIPAA expertise to satisfy the regulatory requirements that healthcare providers must meet.  For Phil Misiowiec, the Chief Technology Officer of Healthcare Blocks, most of his clients already have a cloud strategy in place when they contact him. Systems being deployed to the Healthcare Blocks platform fall into one of three buckets, Misiowiec said


Why the road to 5G might be longer than expected


"From a consumer's perspective, there will certainly be a transition period," Mark McCaffrey, PwC's US technology, media, and telecommunications leader, told TechRepublic. "We won't simply go from 4G/LTE to 5G overnight and a 5G network won't necessarily be maximized with a device meant for 4G. And as 5G is a new technology, we can expect there to be bugs and glitches that need to be worked out along the way."  Creating a 5G network comes with a whole new set of roadblocks that didn't exist when creating 4G, according to the report. This higher density network brings regulatory, cost, and operational challenges.  "The biggest hurdles to 5G are simply logistical ones. To get to the point of widespread adoption on any scale, we must solve regulatory and infrastructure issues," said McCaffrey. "Each federal, state and local community may have unique requirements in its deployment of 5G. All carriers and equipment manufacturers will need to develop their own path to 5G deployment that meets the regulatory requirements including cybersecurity."  5G implementation also requires hundreds of thousands of small cells to be installed across the country, which calls for large bands of spectrum that aren't yet available, the report found.


Five examples of user-centered bank fraud

SMS swapping has become quite common in the banking industry. First, the attacker steals a victim’s private phone number, along with the phone’s Security ID. Then the attacker calls the SIM card call center claiming they lost their phone, have bought a new SIM card and now need to get their old number back. Using the Security ID and other private information, possibly gathered from snooping on social media accounts, they convince the telecommunication support person to perform the phone swap. This scam can even evade security protections. Most banking institutions that offer multi-factor authentication (MFA) to protect online banking sessions and applications rely on SMS-based MFA instead of using mobile tokens. Once hackers steal people’s phone numbers, they have access to these SMS messages. That means they can access the victim’s account even if it has SMS-based MFA in place. Another old but effective tactic is the Man In-The-Middle (MITM) attack, in which attackers target banking platforms that do not adequately protect their infrastructure. This not only allows hackers to steal money, but also negatively affects the bank’s reputation by making their infrastructure seem fragile and vulnerable.


There have been many examples of seemingly well-prepared financial institutions caught off-guard by rogue units or rogue traders who weren’t properly accounted for in risk models. To that end, SR 11-7 recommends that financial institutions consider risk from individual models as well as aggregate risks that stem from model interactions and dependencies. Many ML teams have not started to think of tools and processes for managing risks stemming from the simultaneous deployment of multiple models, but it’s clear that many applications will require this sort of planning and thinking. Health care is another highly regulated industry that AI is rapidly changing. Earlier this year, the U.S. FDA took a big step forward by publishing a Proposed Regulatory Framework for Modifications to AI/ML Based Software as a Medical Device. The document starts by stating that “the traditional paradigm of medical device regulation was not designed for adaptive AI/ML technologies, which have the potential to adapt and optimize device performance in real time to continuously improve health care for patients.”


Black Hat: A Summer Break from the Mundane and Controllable

Security might be your job, but it's just one more thing for laypeople in your organization to worry about. Aside from clear mandates on the topic, compliance-driven requirements, or a recent "near-death" experience, most organizations are still balancing security needs with day-to-day pressing needs in order to win more customers and increase revenue. This is a good thing. Security is asking other people to improve the organization above and beyond what individual workers are held accountable for on a daily basis. It's important to understand that this is the natural order and that security leaders are likely to encounter pushback on additional security controls. ... To make substantial progress on a security problem in a large 20,000-seat corporate environment you need technology. However, when the underlying risk decisions, business processes, and operations have not been addressed in a meaningful way, products only solve part of the problem and give security leaders a false sense of security.


Visa Contactless Cards Vulnerable to Fraudsters: Report
Researchers Leigh-Anne Galloway and Tim Yunusov say they were able to manipulate two data fields that are exchanged between the card and the terminal during a contactless payment. This was done by using a proxy machine that manipulates the transaction data between the card and the payment gateway, essentially creating a man-in-the-middle attack, the researchers report. The researchers successfully tested a proxy machine with five U.K. banks, which they did not name. They discovered that the vulnerability is common to all Visa-issued contactless cards regardless of the bank and the locality of the person using the card, according to the blog. "Positive Technologies tested the attack with five major U.K. banks, successfully bypassing the U.K. contactless verification limit of £30 on all tested Visa cards, irrespective of the card terminal," the researchers note. The researchers say that an attack using the proxy machines can go through Google Pay by adding Visa to a digital wallet.


Cyber Warfare: Army Deploys 'Social Media Warfare' Division To Fight Russia

What's just as interesting is the West's own use of the mainstream and social media to ensure that Russia and its proxies don't have it all their own way. We have always seen that battle for hearts and minds in the physical sphere. What we've started to see with news of cyberattacks on energy grids in Russia and command and control networks in Iran is the beginnings of the same in cyber. "State and non-state actors are continually seeking to gain an advantage in the grey zone that exists below the threshold of conventional conflict," as General Jones put it. And so, moving forward, you can expect much more of the same. "This restructuring is not the answer to everything," Ingram said, "and nor will or can it meet all current threats, but it is the first step in a journey and that first step gives a series of capabilities—and for the new division with psychological warfare in its structure, that rebranding is important in influencing future Army force development."


Self-organizing micro robots may soon swarm the industrial IoT
The robots already jump, and now they self-organize. The Swiss school’s PCB-with-legs robots, en masse, figure out for themselves how many fellow microbots to recruit for a particular job. Additionally, the ad hoc, swarming and self-organizing nature of the group means it can’t fail catastrophically; substitute robots get marshalled and join the work environment as necessary. Ad hoc networks are the way to go for robots. One advantage to an ad hoc network in IoT is that one can distribute the sensors randomly, and the sensors, which are basically nodes, figure out how to communicate. Routers don’t get involved. The nodes sample to find out which other nodes are nearby, including how much bandwidth is needed. The concept works on the same principle as how a marketer samples public opinion by just asking a representative group what they think, not everyone. Ants, too, size their nests like that—they bump into other ants, never really counting all of their neighbors. It’s a strong networking concept for locations where sensors can get moved inadvertently.
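The sampling idea is easy to see in a toy simulation. The sketch below is my own illustration, not EPFL's algorithm: it scatters nodes at random, lets each node discover only the neighbors within its radio range, and extrapolates a swarm-size estimate from that local sample, much as ants gauge population from bump rate. All constants are invented for the example.

```python
# Minimal sketch of ad hoc neighbor sampling (illustrative only).
import random
import math

RADIO_RANGE = 2.0   # hypothetical communication radius
AREA = 10.0         # side length of the square deployment area

def random_nodes(n):
    """Scatter n sensor nodes at random positions (ad hoc deployment)."""
    return [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(n)]

def neighbors(node, nodes):
    """Nodes within radio range communicate directly; no router is involved."""
    return [other for other in nodes
            if other != node and math.dist(node, other) <= RADIO_RANGE]

def estimated_total(node, nodes):
    """Estimate swarm size from the local sample alone:
    sampled density * total area, like polling a representative group."""
    local_area = math.pi * RADIO_RANGE ** 2
    density = len(neighbors(node, nodes)) / local_area
    return density * AREA ** 2

nodes = random_nodes(200)
print("one node's estimate of swarm size:", round(estimated_total(nodes[0], nodes)))
```

The point of the sketch is that no node ever needs a global view; each one acts on what it can sample locally, which is why losing or adding individual robots does not break the group.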


Your multicloud strategy is all wrong
Forced to choose, my guess is most enterprises want the higher-order services from particular clouds more than they want that portability across clouds. The latter may appeal to accounting, but the former appeals to the teams tasked with driving agility and innovation within an enterprise. If you had to pick one of those teams to appease, pick the developers. Every. Single. Time. However, siding with developers doesn’t mean that an enterprise needs to cede control of its IT to a vendor. Rather, by going deep with a vendor, not only does that enterprise put itself in a position to develop more expertise with that cloud, but it also sets itself up as a VIP with that cloud. Anyone who has worked in enterprise software knows that while “all animals are created equal,” following Animal Farm logic, “Some animals are more equal than others.” Vendors always tend to listen to their most committed customers, and that “commitment” isn’t merely a matter of money. The cloud vendors, like all enterprise IT vendors, will tend to partner with those customers who help them to push the envelope on innovation and publish success stories (case studies, conference keynotes, etc.).



Quote for the day:


"It is better to be hated for what you are than to be loved for what you are not." -- André Gide


Daily Tech Digest - August 01, 2019

Dealing with the Disconnect Between Developers and Security

Developers want to write secure code and catch vulnerabilities early on, Fletcher says, but they may not have the necessary skills or management support to focus on prioritizing security. “It is literally more work to do,” he says. There could be organizational challenges, for example, if development functions such as testing are handled in separate groups. Those different groups could have separate charters and mandates to adhere to. “They’re not necessarily working off of the same page at the data level,” Fletcher says. “It becomes difficult to create a symbiotic relationship needed to get to that DevSecOps nirvana.” The disparity is particularly pronounced given the pace of DevOps deployment, compared with non-DevOps software rollouts. The narrow window of time for delivery of DevOps applications can leave little room for security screening. Fletcher says continuous delivery and continuous integration, where DevOps applications are built and delivered on an ongoing basis, can mean deployment of code several times per day. That compares with non-DevOps generated applications that might be released quarterly or biannually.


How Blockchain-Based Digital Credentialing Impacts The World Of Work

New technologies like blockchain, along with advancements in mobile security, have enabled Workday to imagine a new form of digital credential—one that puts individuals in control of their data, and is portable, authentic, and secure. As credentials are issued by organizations and educational institutions, held by individuals, and shared with employers or prospective employers that need to verify them, blockchain provides a common trust layer, allowing each of these parties to independently verify their authenticity. As the common source of verification, blockchain enables data to move between parties, and its distributed ledger can prove that the data has not been modified and the credentials are still valid. This kind of credential creates a transparent, trustworthy, and reliable source of truth that can be verified instantly when shared. We are also taking this blockchain application one step further with our approach to openness. Technology is most powerful when it’s open and interoperable, and this is especially the case with blockchain.
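As a rough illustration of that trust layer (a minimal sketch, not Workday's actual implementation), an issuer can anchor only a hash of the credential on a shared ledger; a verifier then re-hashes whatever credential the individual presents and checks it against the anchored value, which proves the data has not been modified without exposing the data itself. All names and fields below are hypothetical.

```python
# Hash-anchored credential verification, sketched with a dict standing in
# for the distributed ledger.
import hashlib
import json

ledger = {}  # stand-in for a distributed ledger: credential id -> anchored hash

def credential_hash(credential: dict) -> str:
    """Canonical JSON, then SHA-256, so issuer and verifier hash identically."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def issue(credential: dict) -> None:
    """Issuer writes the hash (not the data) to the ledger."""
    ledger[credential["id"]] = credential_hash(credential)

def verify(credential: dict) -> bool:
    """Employer re-hashes the credential the candidate shared and compares."""
    return ledger.get(credential["id"]) == credential_hash(credential)

cred = {"id": "cred-42", "holder": "Jane Doe", "degree": "BSc Computer Science"}
issue(cred)
print(verify(cred))                                   # True
print(verify({**cred, "degree": "PhD Astrophysics"})) # False: tampering detected
```

Because only hashes live on the ledger, the individual keeps the credential itself and decides who sees it, while any party can still confirm it is unaltered.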



5G enthusiasm abounds from tech CEOs: Is it warranted?


The enthusiasm about 5G is flowing out of earnings conference calls. The big question is whether it is justified. Aside from carriers touting their 5G build out, Qualcomm CEO Steve Mollenkopf said 5G will be deployed and with devices faster than expected. He said: We now have over one hundred fifty 5G designs launched or in development using our 5G chipsets. In addition to core chipsets, virtually all our 5G design wins are powered by our complete RF Front-End solutions for 5G Sub-6 and/or millimeter wave. By the first calendar quarter of 2020, we anticipate reaching the inflection point as our financial results begin to reflect the benefits of our substantial efforts over the years to bring 5G to market worldwide. Qualcomm's take revolves around China ramping 5G commercial service and US carriers all on track with nationwide 5G coverage by mid-2020. There will be more operators and devices launching with 5G relative to 4G in the same time frame, according to Qualcomm. Samsung's conference call was also bullish on 5G. Samsung has multiple ways to play 5G, with smartphones, networking gear, memory and chips all set to benefit. 


Hacking security alert issued for small planes, DHS warns modern flight systems are 'exploitable'


A security alert was issued by federal officials Tuesday focusing on small planes after authorities voiced concerns that modern flight systems are vulnerable to hacking in the event a malicious actor is able to gain physical access to the aircraft. The alert from the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency said that a security flaw of open electronics systems known as "the CAN bus" was discovered by a Boston-based cybersecurity company and reported to the federal government, which found the systems are "exploitable." "An attacker with physical access to the aircraft could attach a device to an avionics CAN bus that could be used to inject false data, resulting in incorrect readings in avionic equipment," CISA said in its alert. "The researchers have outlined that engine telemetry readings, compass and attitude data, altitude, airspeeds, and angle of attack could all be manipulated to provide false measurements to the pilot." Most airports have security officers in place to restrict unauthorized access.
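The underlying issue is that CAN frames carry no sender authentication: every frame is just an arbitration ID plus a handful of data bytes, and any device attached to the bus may transmit. The sketch below is purely illustrative (it is not the researchers' tooling, and the frame IDs are invented); it shows a naive monitor that flags frames with unexpected IDs.

```python
# Toy CAN-frame model and whitelist monitor (illustrative only).
from collections import namedtuple

CANFrame = namedtuple("CANFrame", ["arbitration_id", "data"])

# Hypothetical whitelist of IDs legitimately present on this bus,
# e.g. engine telemetry, attitude, and airspeed frames.
EXPECTED_IDS = {0x100, 0x101, 0x102}

def monitor(frames):
    """Flag frames whose arbitration ID is not in the expected set."""
    for frame in frames:
        if frame.arbitration_id not in EXPECTED_IDS:
            print(f"ALERT: unexpected frame id={frame.arbitration_id:#05x} "
                  f"data={frame.data.hex()}")

traffic = [
    CANFrame(0x100, bytes([0x12, 0x34])),   # normal telemetry
    CANFrame(0x7FF, bytes([0xDE, 0xAD])),   # unknown source -> flagged
]
monitor(traffic)
```

Note that a monitor like this cannot catch spoofed frames that reuse legitimate IDs, which is why CISA's guidance emphasizes restricting physical access to the aircraft in the first place.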


Podcast: 'Know thy user' a key tenet of modern IT design


The first thing you have to do is know who your user is. If you don't know that, then any design work is going to fall short. And the design work that IT teams are delivering now is aimed not only at IT but also at different constituencies within their businesses. It might be developers who are in a LOB trying to create the next service or business application that enables their business to be successful. Again, if we look back, the CIO or leaders in IT in the past would have chosen a given platform, whether a database to standardize on or an application server. Nowadays, that's not what happens. Instead, the LOBs have choices. If they want to consume an open source project or use a service that someone else created, they have that choice. Now IT is in the position of having to provide a service that is on par, able to move quickly and efficiently, and meets the needs of developers and LOBs. And that's why it's so important for design to expand the users we are targeting.


A Realistic Path Forward for Security Orchestration and Automation


The idea of security orchestration and automation is itself "the shiny new thing on the block," Cavey says. However, investing in more technology to solve the problem of disparate tools not working in orchestration is not a silver bullet. Keeping infrastructure and data secure across the entire organization requires staffing, which is one reason why Cavey says he anticipates a number of failed implementations on the horizon. Many companies have unrealistic motivations when they are investing in these platforms, he says.  Those motivations are coming from the pain points an organization is feeling, according to Cavey: "There's incredible pressure coming down from the board for these security teams to be able to say, 'Tell us you have this; tell us we are in good shape. We have an interest in IT security and knowing that we as a company are not going to be the next headline.'" Take data loss prevention (DLP), for example. When introduced nearly a decade ago, DLP's promise to the average CISO was its implementation would protect data and prevent it from being stolen, Cavey explains.


Intent-Based Networking (IBN): Bridging the gap on network complexity

Undoubtedly, we need new tools, not just from the physical device’s perspective, but also from the traffic’s perspective. Verifying things manually will no longer work. A packet header carries hundreds of bits, and a single flow may be involved in numerous conversations at once, so tracking end-to-end flows by hand is impossible. When it comes to provisioning, CLI is the most common method used to make configuration changes. But it has many drawbacks. Firstly, it offers the wrong level of abstraction: it targets the human operator, and there is no validation that engineers will follow the correct procedures. Also, CLI languages are not standardized across vendors. The industry reacted and introduced NETCONF. However, NETCONF has many inconsistencies across vendor operating systems. Many use their own proprietary format, making it hard to write NETCONF applications that work across multiple vendors' networks. NETCONF was meant to make automation easy, but in reality the inconsistencies it introduced made automation even more difficult.
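For comparison, here is what the machine-friendly alternative looks like in practice. The sketch assumes the Python ncclient library and a NETCONF-enabled device at a hypothetical address; it pulls the running configuration as structured XML that a program can validate, instead of scraping CLI text. As noted above, the vendor-specific data models inside that XML are where the inconsistencies creep back in.

```python
# Minimal NETCONF retrieval sketch using ncclient (device details are hypothetical).
from ncclient import manager

with manager.connect(
    host="192.0.2.1",          # hypothetical device address
    port=830,                  # standard NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,      # acceptable for a lab sketch, not for production
) as conn:
    reply = conn.get_config(source="running")   # structured config, not CLI text
    print(reply.xml[:500])                      # inspect the start of the XML
```

The value over CLI is that the reply is machine-parseable and can be validated programmatically before and after a change, which is exactly the abstraction a human-oriented CLI cannot offer.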


Learning lessons from the unicorns: the tech phenomena

In the UK, 17 companies have attained unicorn status to date. These include the digital bank, Monzo, which recently reached a milestone of 2 million customers and is launching in the US, and food delivery start-up, Deliveroo, which raised £452 million in a funding round last year and is currently valued at more than £1.5 billion. For private tech companies, an IPO strategy could be an attractive proposition, potentially delivering the funding boost needed to take the business into new markets or allow it to innovate and/or diversify its product or service offering. Instead of focusing purely on financial data to support the move, ambitious businesses pursuing this strategy might seek to emulate the unicorns by concentrating on developing a compelling growth story, based on metrics about user numbers and preferences or rapid take up in a new market. Of course, a clear business plan, which sets out where profits will come from in the future is also essential. Ambitious, fast-growing businesses are among those most likely to consider an IPO. 


15 signs you've been hacked -- and how to fight back

The best protection is to make sure you have good, reliable, tested, offline backups. Ransomware is gaining sophistication. The bad guys using malware are spending time in compromised enterprise environments figuring out how to do the most damage, and that includes encrypting or corrupting your recent online backups. You are taking a risk if you don’t have good, tested backups that are inaccessible to malicious intruders. If you belong to a file storage cloud service, it probably has backup copies of your data. Don’t be overly confident. Not all cloud storage services have the ability to recover from ransomware attacks, and some services don’t cover all file types. Consider contacting your cloud-based file service and explaining your situation. Sometimes tech support can recover more of your files than you can yourself. Lastly, several websites may be able to help you recover your files without paying the ransom. Either they’ve figured out the shared secret encryption key or some other way to reverse-engineer the ransomware. You will need to identify the ransomware program and version you are facing.
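One simple way to make "tested backups" concrete (my own sketch, not the article's recommendation) is to record a hash manifest when the offline backup is created and re-verify it periodically, so corruption or encryption of backed-up files is caught before the backup is actually needed. Paths and filenames below are hypothetical.

```python
# Backup integrity check via a SHA-256 manifest (illustrative sketch).
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    """Hash a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir: str, manifest_file: str) -> None:
    """Run when the backup is created; store the manifest offline as well."""
    manifest = {str(p): file_hash(p)
                for p in Path(backup_dir).rglob("*") if p.is_file()}
    Path(manifest_file).write_text(json.dumps(manifest, indent=2))

def verify_backup(manifest_file: str) -> bool:
    """Run periodically: any mismatch means the backup can no longer be trusted."""
    manifest = json.loads(Path(manifest_file).read_text())
    ok = True
    for name, expected in manifest.items():
        p = Path(name)
        if not p.exists() or file_hash(p) != expected:
            print("MISMATCH or MISSING:", name)
            ok = False
    return ok

# Example usage (paths are hypothetical):
# write_manifest("/mnt/offline_backup", "backup_manifest.json")
# verify_backup("backup_manifest.json")
```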


BizDevOps tools await enterprise maturity


Splunk execs also firmly believe BizDevOps is where the market is headed, but said a majority of enterprise customers still struggle with it. "Many of our customers still deal with disjointed teams -- it's like DevSecOps, it's heading in that direction, but [BizDevOps] is probably not as close [to widespread adoption] as IT and security," said Tim Tully, CTO of Splunk. "The business side has to become more agile. People are seeing convergence in IT, and the world is evolving, and business has to evolve along with it." IT experts who consult with enterprise clients, however, said that evolution has been very slow so far. "We see organizations that want to close the gap between the IT perspective and business perspective of products. But that means addressing not just features, but defects, risks and debt, and what we see is companies double down on CI/CD," said Carmen DeArdo.



Quote for the day:


"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham