
Daily Tech Digest - April 15, 2020

Weekly health check of ISPs, cloud providers and conferencing services

Outages for ISPs globally were down 9.13% during the week of March 30 from the week before, whereas U.S. outages were down 16.7%, dropping from 120 to 100. Worldwide the outages were also down, from 252 to 229. Public cloud outages rose worldwide from 22 to 25, and in the U.S. there was one outage, up from zero the previous week. Outages for collaboration apps rose dramatically, increasing more than 260% globally and more than 500% in the U.S. over the week before. The actual numbers were an increase from eight to 29 worldwide, and up from four to 25 in the U.S. ... During the week of April 6-April 12, service outages for ISPs, cloud providers, and conferencing services dropped overall. They went from 298 down to 177 globally (40%, a six-week low), and in the U.S. dropped from 129 to 72 (44%). Globally, ISP outages were down from 229 to 141 (38%), and in the U.S. were down from 100 to 56 (44%). Cloud provider outages were also down overall from 25 to 19 (24%), ThousandEyes says, but jumped up from one to six (500%) in the U.S., which saw the highest rate of increase in seven weeks. Even so, the U.S. total was relatively low. “Again, cloud providers are doing quite well,” ThousandEyes says.


A Smattering of Thoughts About Applying Site Reliability Engineering principles

Google's chapter “Life of an On-Call Engineer” has a lot more detail on the principles of on-call rotation work compared to project-oriented work. Of particular relevance is the mention of capping the time Site Reliability Engineers spend on purely operational work at 50%, to ensure the remaining time is spent building solutions that improve automation and service reliability proactively rather than reactively. In addition, the competing demands of reactive operational work and getting in the zone on project work with code can limit the ability to address the toil of continual fixes. Google's SRE handbook also addresses this, noting that you should not assign operational work and project work to the same person at the same time. Instead, whoever is on call for reactive work should focus fully on it rather than attempting project work in parallel; trying to do both results in frustration and fragmented effort. This is refreshing, as I know I've felt the pressure of needing to deliver a project while reactive operational issues take precedence.


Coronavirus: Zoom user credentials for sale on dark web


Analysis of the database found that alongside personal accounts belonging to consumers, there were also corporate accounts registered to banks, consultancies, schools and colleges, hospitals, and software companies, among many others. IntSights said that whilst some of these accounts only included an email and password, others included Zoom meeting IDs, names and host keys. “The more specific and targeted the databases, the more it's going to cost you. A database of random usernames and passwords is probably going to go pretty cheap because it's harder to utilise,” Maor told Computer Weekly. “But if somebody says they have a database of Zoom users in the UK the price is going to get much higher because it's much more specific and much easier to use.” Whilst it is not uncommon at all for usernames and passwords to be shared or sold, Maor said that some of the discussions that followed had been intriguing, with the sale spawning a number of different posts and threads discussing different approaches to targeting Zoom users, many of them focused on credential stuffing attacks.


Remote work will be forever changed post-COVID-19

The problem with these two competing visions is that they assume we'll return to an extreme version of a pre-COVID-19 scenario, either doubling down on traditional remote working arrangements, or spending even more time traveling and sitting in offices, working the way we always did before the virus. I believe that the key lessons many of us will take from this period of enforced remote work are less about location, and more about time and work management. One thing I noticed and confirmed with several colleagues early in my COVID-19 experience was that productive video conferences were mentally more exhausting than an equivalent in-person meeting. A two-hour workshop over videoconference had the same mental drain as an all-day affair in an in-person meeting, especially for the presenters and facilitators. The medium seems to force more intense interactions and to demand more planning to orchestrate successfully. Collaborating in the same physical space was the pre-COVID-19 norm since it was easy.


Comparing Three Approaches to Multi-Cloud Security Management


IaC is a second approach to multi-cloud management. This approach arose in response to utility computing and second-generation web frameworks, which gave rise to widespread scaling problems for small businesses. Administrators took a pragmatic approach: they modeled their multi-cloud infrastructures with code, and were therefore able to write management tools that operated in a similar way to standard software. IaC sits in between the other approaches on this list, and represents a compromise solution. It gives more fine-grained control over cloud management and security processes than a CMP, especially when used in conjunction with SaaS security vendors whose software can apply a consistent security layer to a software model of your cloud infrastructure. This is important because SaaS is growing rapidly in popularity, with 86% of organizations expected to have SaaS meeting the vast majority of their software needs within two years. On the other hand, IaC requires a greater level of knowledge and vigilance than either CMP or cloud-native approaches.


DevOps implementation is often unsuccessful. Here's why

The primary feature of DevOps is, to a certain extent, the automation of the software development process. Continuous integration and continuous delivery (CI/CD) principles are the cornerstones of this concept, and as you likely know, are very reliant on tools. Tools are awesome, they really are. They can bring unprecedented speed to the software delivery process, managing the code repository, testing, maintenance, and storage elements with relatively seamless ease. And if you’re managing a team of developers in a DevOps process, these tools and ​the people who use them are a vital piece of the puzzle​ in shipping quality software. However, while robots might take all our jobs and imprison us someday, they are definitely not there yet. Heavy reliance on tools and automation leaves a window wide open for errors. Scans and tests may not pick up everything, code may go unchecked, and that presents enormous quality (not to mention, security) issues down the track. An attacker only needs one back door to exploit to steal data, and forgoing the human element in quality and security control can have disastrous consequences.


Videoconferencing quick fixes need a rethink when the pandemic abates

tech spotlight collaboration nww by metamorworks gettyimages 1154341603 3x2 2400x1600
A tier down from its immersive telepresence big brother is the multipurpose conference room. Inside offices, companies have designated multipurpose rooms equipped more minimally with videoconferencing equipment. Instead of spending big bucks on devoting an entire room, with all of the bells and whistles, to an immersive telepresence system, why not outfit a conference room with enough cameras, screens and microphones to offer a good virtual meeting experience, while still leaving the room available for general meetings? These multipurpose rooms generally cost a few thousand dollars to outfit with a camera, a microphone array, maybe some integrated digital whiteboards, and a PC or iPad as a control mechanism, Kerravala says. It's a lot more affordable, but a multipurpose conference room is still bandwidth-intensive. And it's likely to be tapping bandwidth on the shared network – instead of having its own pipe, as an immersive room would – and that needs to be taken into consideration in network capacity planning.



Information Age roundtable: Harnessing the power of data in the utilities sector

When it comes to data usage across the company, a major aspect to be considered is the trust that is placed in employees. For Graeme Wright, chief digital officer, manufacturing, utilities and services at Fujitsu UK, “data is only trusted with certain people. Sometimes, it goes across organisational boundaries, because of the third-party suppliers that people are using, and I don’t know if people have really been incentivised to exploit the value of that data.” Wright went on to explain that the field force “need a different method of interacting to make sure that the data flows freely from them into the actual centre so we can actually analyse it and understand what’s going on”. Steven Steer, head of data at Ofgem, also weighed in on this issue: “This is really central to the energy sector’s agenda over the last year or so. The Energy Data Task Force, an independent task force, published its findings in June, and one of the main findings was the presumption that data is open to all, not just within your own organisation.”



At first glance, low-code and cloud-native don’t seem to have much to do with each other — but many of the low-code vendors are still making the connection. After all, microservices are chunks of software code, right? So why hand-code them if you could take a low-code approach to craft your microservices? Not so fast. Microservices generally focus on back-end functionality that simply doesn’t lend itself to the visual modeling context that low-code provides. Furthermore, today’s low-code tools tend to center on front-end application creation (often for mobile apps), as well as business process workflow design and automation. Bespoke microservices are unlikely to be on this list of low-code sweet spots. It's clear from the definition of microservices above that they are code-centric and thus might not lend themselves to low-code development. However, how organizations assemble microservices into applications is a different story. Some low-code vendors would have you believe that you can think of microservices as LEGO blocks that you can assemble into applications. Superficially, this LEGO metaphor is on the right track – but the devil is in the details.


Graph Knowledge Base for Stateful Cloud-Native Applications

As a rule, stateless applications do not persist any client application state between requests or events. “Statelessness” decouples cloud-native services from client applications to achieve the desired isolation. The tenets of microservice and serverless architecture expressly prohibit retention of session state or global context. However, while the state doesn’t reside in the container, it still has to live somewhere. After all, a stateless function takes state as inputs. Application state didn’t go away; it moved. The trade-off is that state, and with it any global context, must be reloaded with every execution. The practical consequence of statelessness is a spike in network usage, which results in chatty, bandwidth- and I/O-intensive inter-process communications. This comes at a price, in terms of both increased cloud service expenditures and latency and performance impacts on client applications. Distributed computing had already weakened the bonds of data gravity as a long-standing design principle, forcing applications to integrate with an ever-increasing number of external data sources. Cloud-native architecture flips the script completely: data ships to functions.
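
To make that trade-off concrete, here is a minimal, framework-neutral Kotlin sketch; the store interface and all names are illustrative, not from any particular cloud SDK. The handler keeps no fields, so every invocation pays a load and a save against an external store:

    // Stand-in for an external state store (Redis, DynamoDB, etc.) so the
    // sketch runs on its own; in production each call is a network hop.
    interface StateStore {
        fun load(sessionId: String): Map<String, String>
        fun save(sessionId: String, state: Map<String, String>)
    }

    class InMemoryStore : StateStore {
        private val data = mutableMapOf<String, Map<String, String>>()
        override fun load(sessionId: String) = data[sessionId] ?: emptyMap()
        override fun save(sessionId: String, state: Map<String, String>) {
            data[sessionId] = state
        }
    }

    // Stateless handler: state arrives as input and leaves as output.
    fun handle(store: StateStore, sessionId: String, input: String): String {
        val state = store.load(sessionId)                             // round trip 1
        val count = (state["count"]?.toIntOrNull() ?: 0) + 1
        store.save(sessionId, state + ("count" to count.toString()))  // round trip 2
        return "request #$count: $input"
    }

    fun main() {
        val store = InMemoryStore()
        println(handle(store, "s1", "hello")) // request #1: hello
        println(handle(store, "s1", "again")) // request #2: again
    }

The two round trips per request are exactly the chatty inter-process traffic the article describes; the state didn't disappear, it moved behind a network hop.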



Quote for the day:


"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis


Daily Tech Digest - April 01, 2020

Providers address capacity, supply-chain challenges brought on by COVID-19

In terms of physical infrastructure, Netflix had to overcome some supply-chain obstacles. "We have had multiple fires at this point with our supply chain," Temkin said. For example, the primary server manufacturer for Netflix is located in Santa Clara County, Calif., where residents have been ordered to shelter in place. "We had 24 hours to figure out how to get as many of the boxes out of there as we possibly could," he said. Netflix has resolved those supply issues, for the most part, by sourcing elsewhere. "By and large, we've been able to use most of the infrastructure we have deployed. Partners like Equinix have been great about getting cross-connects provisioned quickly where we need them in order to get interconnects beefed up in certain markets," Temkin said. On the content-production side, there's not a lot happening – at Netflix or anywhere else – as studios halt film and TV production to avoid further fueling the outbreak. "One of the big challenges we are trying to figure out is: what parts of it can we restart?" Temkin said.


Key risk governance practices for optimal data security

From cyber security standards to policies around articulating data handling processes and providing transparent updates, the organization needs to clearly understand all of the compliance standards relevant to it. In addition, it needs to make sure its regulatory readiness processes extend not just to internal compliance and risk management but also to compliance with regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This is especially important for heavily regulated industries such as banking, financial services, and technology, where many of the organizations’ business models are rooted in customer data. To support the two elements above, the organization needs to undertake a sustained effort to seamlessly map out its data handling process across the stages of acquisition, storage, transformation, transport, archival, and even disposal.


The overriding factor that separates IT and security teams is organizational misalignment; the two teams often report up through different management structures. The executives leading each faction -- the CIO and CISO, respectively -- typically have different goals, which are measured and rewarded by disparate key performance indicators (KPIs). In addition, the CIO is often perceived as being higher in the executive pecking order. To create a culture of shared security across the organization, give the CISO and other IT security leaders more status and authority. Include them in the strategy, planning and early development phases of new IT and application projects and treat them as a trusted partner. Shared authority at the executive level requires shared goals. IT operations and security teams will likely continue to have separate budgets and distinct projects, but hold managers in each organization accountable for common -- or at least comparable and tightly related -- objectives and KPIs.


COVID-19 puts new demands on e-health record systems

IT staffers are also required to update EHR systems as additional clinical workers are drafted for duty. “Some health providers have reported that they're being kept very busy with setting up processes for quickly onboarding new staff and changing their role within the system,” said Jones. “That requires a change in configuration of the EHR in terms of their role-based access, and in some cases it is creating new user accounts.” As workflows are updated to deal with the COVID-19 response, it is important that EHR systems don’t impede clinicians’ work, are straightforward, and seamlessly integrate with existing care delivery processes. “The EHR workflow really needs to disappear into the background as providers ramp up to address COVID-19 capacity surges,” said Jones. “At a fundamental level, all EHRs need to be working as intended — now more than ever,” said Bensinger. “And not only clinical workflows and features. You want to be sure that the registration and billing components are also collecting accurate and complete information.”


Who’s responsible for protecting personal information?

Americans are split on who should be held most responsible for ensuring personal information and data privacy are protected. Just over a third believe companies are most responsible (36%), followed closely by the individuals providing their information (34%), with slightly fewer holding the government most responsible (29%). Half of Americans don’t give companies (49%) and government (51%) credit for doing enough when it comes to data privacy and protection. Notably, compared to the other countries surveyed, Americans are most likely to put the burden on individuals—in fact, it’s the only country where the individual consumer outranks government as most responsible. “Americans are outliers compared to other countries surveyed in that they are willing to accept a lot of the responsibility in protecting their own data and personal information,” says Paige Hanson, chief of cyber safety education, NortonLifeLock. “This could be the year Americans truly embrace their privacy independence, particularly with the help of new regulations like the California Consumer Privacy Act giving them control over how their data is used.”


Can cloud computing sustain the remote working surge?


Currently, cloud providers are still doing a good job of distributing resources among tenants, but at some point rationing measures may need to be implemented to respond to overwhelming demand. Not all cloud services are going to drown, though. Matthew Prince, co-founder and CEO of Cloudflare, said that providers may have “individual challenges spurred by the pandemic” – their ability to cope with the shift in usage is highly dependent on their IT architecture. Major cloud providers such as Amazon have expressed confidence in meeting customer demand for capacity. By and large, public cloud providers seem to be coping well with the skyrocketing demand – there have been no major cloud crashes yet. What providers should really be concerned about is the challenges that will come post-pandemic. By then, enterprises will have already recognized the unquestionable value of cloud, and will double down on cloud migrations. Cloud providers must make sure that their data infrastructure is prepared to support data at unprecedented scales. Warren Buffett once remarked: “You will only find out who is swimming naked when the tide goes out.”


Writing Microservices in Kotlin with Ktor—a Multiplatform Framework for Connected Systems


Ktor (pronounced Kay-tor) is a framework built from the ground up using Kotlin and coroutines. It gives us the ability to create client- and server-side applications that can run on and target multiple platforms. It is a great fit for applications that require HTTP and/or socket connectivity. These can be HTTP backends and RESTful systems, whether or not they’re architected in a microservice approach. Ktor was born out of inspiration from other frameworks, such as Wasabi and Kara, with the aim of leveraging to the maximum extent some of the language features that Kotlin offers, such as DSLs and coroutines. When it comes to creating connected systems, Ktor provides a performant, asynchronous, multi-platform solution. Currently, the Ktor client works on all platforms Kotlin targets, that is, JVM, JavaScript, and Native. Right now, Ktor server-side is restricted to the JVM. In this article, we’re going to take a look at using Ktor for server-side development. ... routing, get, and post are all higher-order functions. In this case, we’re talking about functions that take other functions as parameters. Kotlin also has a convention that if the last parameter to a function is another function, we can place it outside of the brackets.
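
Here is a minimal sketch of that DSL. The package names follow the Ktor 1.x line that was current when this was written (later releases reorganized them), and the routes are invented for illustration:

    import io.ktor.application.call
    import io.ktor.response.respondText
    import io.ktor.routing.get
    import io.ktor.routing.post
    import io.ktor.routing.routing
    import io.ktor.server.engine.embeddedServer
    import io.ktor.server.netty.Netty

    fun main() {
        // embeddedServer also takes a function as its last parameter:
        // the module that configures the application.
        embeddedServer(Netty, port = 8080) {
            routing {
                // get and post are higher-order functions; each handler
                // lambda sits outside the brackets, giving the DSL feel.
                get("/health") {
                    call.respondText("OK")
                }
                post("/orders") {
                    call.respondText("order received")
                }
            }
        }.start(wait = true)
    }

Every block here (the module, routing, and each route handler) is just a trailing lambda passed to a higher-order function, which is what makes the code read like a configuration language while remaining plain Kotlin.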


Get ready for the post-pandemic run on cloud

Business seems to change around pain. In the past weeks, companies that had already migrated to public cloud had a strategic advantage over those still operating mostly in traditional data centers. Traditional data centers are the responsibility of enterprise IT, and as such they are run by human employees who have to deal with mandatory lockdowns or even self-quarantine and may not be able to operate remotely. A CIO friend of mine has a physical storage system that is down, with a direct replacement sitting next to it, shrink-wrapped and ready to be installed. So far, he can’t get enough qualified staffers physically into the data center to make the swap. As a result, a major system is not operating, and they are losing millions a week. Those who have migrated to public clouds don’t have to deal with such things. The virtual and ubiquitous nature of cloud computing that scared so many IT pros during the past several years is actually one of the major reasons to move to public cloud. The weakness for enterprise IT recently has been the inability to support a physical set of systems that need physical fixes by humans.


Using Zoom while working from home? Here are the privacy risks to watch out for


Privacy experts have previously expressed concerns about Zoom: In 2019, the video-conferencing software experienced both a webcam hacking scandal, and a bug that allowed snooping users to potentially join video meetings they hadn't been invited to. This month, the Electronic Frontier Foundation cautioned users working from home about the software's onboard privacy features. Here are some of the privacy vulnerabilities in Zoom that you should watch out for while working remotely. ... Employers, managers and workers-from-home, beware. Zoom's tattle-tale attention-tracking feature can tell your meeting host if you aren't paying attention to their meticulously-composed visual aids. Whether you're using Zoom's desktop client or mobile app, a meeting host can enable a built-in option which alerts them if any attendees go more than 30 seconds without Zoom being in focus on their screen.  If you're anything like me, your Zoom meetings rarely consume your full screen. Jotting down notes in a separate text file, adding dates to calendars, glancing at reference documents or discreetly asking and answering clarifying questions in a separate chat -- these key parts of any normal meeting are all indicators of an engaged listener.


Neural computing should be based on insect brains, not human ones

Marshall is referring to a form of deep-learning computing for which developers are creating electronic architectures that mimic neurobiological architectures that could replace traditional computing. Deep-learning computing falls within artificial intelligence in which computers learn through rewards for recognizing patterns in data. A difference is that in deep learning neural processes are used. Variations include neuromorphic computing that I wrote about here that can analyze high- and low-level detail such as edges and shapes. Bees “are basically mini-robots,” says Marshall, quoted in the Daily Telegraph. “They’re really consistent visual navigators, they can navigate complex 3-D environments with minimal learning and using only a million neurons in a cubic millimeter of the brain.” That size element could grab the attention of developers who are working toward tiny robots that communicate with each other to self-organize and could be used, for example, to move objects in factories.



Quote for the day:


“When I look at...great experiences, it’s often more to do with the DNA than the MBA.” -- Shaun Smith


Daily Tech Digest - March 28, 2020

Coronavirus transforms peak internet usage into the new normal


"We've been watching the network very closely," said Joel Shadle, a spokesman for Comcast. "We're seeing a shift in peak usage. Instead of everyone coming home and getting online, we're seeing sustained usage and peaks during the day." AT&T reported Monday that on Friday and again on Sunday it hit record highs of data traffic between its network and its peers, driven by heavy video streaming. The company also said it saw all-time highs in data traffic from Netflix on Friday and Saturday with a slight dip on Sunday. And the company reported that its voice calling traffic has been way up, too. Wireless voice calls were up 44% compared to a normal Sunday; Wi-Fi calling was up 88% and landline home phone calls were up 74%, the company said in its press release Monday.  AT&T also said it has deployed FirstNet portable cell sites to boost coverage for first responders in parts of Indiana, Connecticut, New Jersey, California and New York. Cloudflare, which provides cloud-based networking and cybersecurity services and which has been tracking worldwide data usage, noted in a blog post last week that it had seen network usage increase as much as 40% in Seattle, where the coronavirus first broke out in the US.



The Ecommerce Surge: Guarding Against Fraud

As more consumers shift to online shopping during the COVID-19 pandemic, retailers must ramp up their efforts to guard against ecommerce payment fraud, says Toby McFarlane, a cybersecurity expert at CMSPI, a payments consultancy. "Retailers should have in place already tools to monitor fraud and approval rates" so they can be benchmarked, McFarlane says in an interview with Information Security Media Group. "If you see a spike in fraud, for example, you want to know if that's a general industry trend or if that is something specific to your business." The shift toward ecommerce in recent weeks presents opportunities to gain a competitive advantage, McFarlane says. "We've seen average transaction values are increasing online, so if merchants can ensure their online infrastructure and experience is set up to handle that, then we could see certain merchants taking market share from non-optimized merchants," he says.


How to refactor the God object antipattern


It's not good enough to simply write code that works. That code must be easily maintained, enhanced and debugged when problems happen. One of the reasons why object-oriented programming is so popular is because it delivers on these requirements. But antipatterns often appear when developers take shortcuts or focus more on the need to get things done instead of done right. One of those common antipatterns is the God object. One of the main concepts in object-oriented programming is that every component has a single purpose, and that component is only responsible for the properties and fields that allow it to perform its pertinent functions. ... Good object-oriented design sometimes takes a back seat to a need to get things done, and the single responsibility model gets thrown out the window. Then, out of nothingness, the God object emerges. In simple terms, the God object is a single Java file that violates the single-responsibility design pattern because it: performs multiple tasks; declares many unrelated properties; and maintains a collection of methods that have no logical relationship to one another, other than performing operations pivotal to the application function.
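
The contrast is easy to see in a few lines. The article's context is Java; Kotlin is used here for brevity, and all class names are invented for illustration:

    // God object: persistence, pricing and notification crammed together,
    // with no logical relationship beyond "the app needs all of it".
    class StoreManager {
        fun saveOrder(orderId: Int) { /* talks to the database */ }
        fun applyDiscount(total: Double): Double = total * 0.9
        fun emailReceipt(address: String) { /* talks to the mail server */ }
    }

    // Refactored: each class has a single responsibility and owns only
    // the properties and methods needed to perform it.
    class OrderRepository { fun save(orderId: Int) { /* database only */ } }
    class PricingService { fun applyDiscount(total: Double): Double = total * 0.9 }
    class ReceiptMailer { fun send(address: String) { /* mail only */ } }

After the split, a change to mail delivery can no longer break order persistence, which is the practical payoff of the single-responsibility principle the article describes.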


What’s Next in DevOps?

DevOps is aimed at "actualizing agile" by ensuring that teams have the technical capabilities to be truly agile, beyond just shortening their planning and work cadence. Importantly, DevOps also has Lean as part of its pedigree. This means that there is a focus on end-to-end lifecycle, flow optimisation, and thinking of improvement in terms of removing waste as opposed to just adding capacity. There are a huge number of organisations and teams that are still just taking their first steps in this process. For them, although the terminology and concepts may seem overwhelming at first, they benefit from a wide range of well-developed options to suit their development lifecycle needs. I anticipate that many software tools will be optimizing for ease-of-use, and continuing to compete on usability and the appearance of the UI. Whereas most early DevOps initiatives were strictly script and configuration file based, more recent offerings help to visualise processes and dependencies in a way that is easily digested by a broader segment of the organization.


Tips for cleaning data-center gear in response to coronavirus

Dell has come up with some guidance for cleaning its data center products. It's well timed, as data-center operators are tasked with implementing access and cleaning procedures in response to COVID-19. It's a real issue. The two biggest data center and colocation providers, Equinix and Digital Realty Trust, are restricting visitors to their facilities for the time being. Since the hardware in a colocation data center is owned by the clients, they have every right to visit the facility to perform maintenance or upgrades – but not for now. Meanwhile, data-center staff have been declared essential and are exempt from California's "stay at home" order, so like grocery store and banking staff, data center workers can go to work. Right off the bat, Dell acknowledges that its data center products "are not high touch products," and that data centers should have a clean room policy where people are required to sanitize their hands before they enter. If your gear does need sterilization, Dell recommends engaging a professional cleaning company that specializes in sterilizing data center equipment. If that's not possible, then you can do it yourself as a last resort.


States of shock: Recovering our tech economy after COVID-19


Segal says the effects of the current economic downturn may be compounded by crises of confidence throughout the world, and reactions to the uncertain nature of the virus' transmissivity path — particularly in those countries where uncertainty preceded action. But that uncertainty, being a psychological factor, could be remedied in short order, giving her optimism that the global economy, including technology, could resume its previous course by the end of 2020. "We've certainly had at least a pause," remarked ZDNet contributor Ross Rubin, principal analyst with Reticle Research. He noted Apple's warning of supply chain disruptions for components for iPhone and other devices. As a supplier itself, it first closed its retail outlets inside China, and later as infection cases within China subsided, reopened those stores at roughly the same time it closed its retail outlets outside China. "The reports that we're getting back now is that the factories are starting to gear up again," Rubin continued. For example, Apple has announced product refreshes for iPad, still on schedule for May. "There seems to be some confidence there that, while those products do not ship in anywhere near the same volumes as iPhones — particularly the iPad Pro, which is a more premium product — they are introducing new, cellular-enabled products."


Aisera: The Next Generation For RPA (Robotic Process Automation)

A good way to look at this is as a simple equation: AI + RPA = Conversational RPA. When you converge AI and RPA, you get Conversational RPA. AI provides a human-like dialogue interface that gives users consumer-like application experiences, similar to those of Alexa, WhatsApp, Instagram, and Snapchat. This simple, natural, human-like interface interacts with users and performs duties, tasks, IT workflows, and business workflows. RPA is used to automate simple and complex workflows that are highly repetitive and typical of back-office functions. Most of these should not require humans to manage, monitor or execute them. Conversational RPA’s self-learning ability reduces the barrier to user adoption and lends itself to expediting complex challenges, like cloud and application integrations, compliance, audit trail creation, and user experience analysis, that require complex workflows. Conversational RPA supports new workflows and existing workflows, and provides a way to customize workflows to meet business needs.


Automate security testing and scans for DevSecOps success


Automated security testing analyzes environments to make sure they meet expectations. Organizations mandate particular environment configurations to meet security and performance goals, but you don't know that the configuration is as expected without testing. Processes like white box and black box testing can help QA engineers pinpoint potential vulnerabilities before it's too late. If configuration is out of specification, the software team can halt the release and remediate the security deficiencies themselves, or alert the security team. Remediation on the fly might be the better option if automation is in place, such as declarative configuration management, to handle configuration drift. If you have both red teams -- aggressive fake attackers -- and blue teams -- their counterparts enacting defenses -- in security, this is also the phase in which you should launch real attacks against your code. If the app can't handle it, it's time to go back to the drawing board with the developers to make the product more resilient. If the app passes, push to production with peace of mind.
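
As a minimal sketch of that idea in Kotlin (keys, values, and the exit convention are illustrative, not any particular tool's schema), an automated check can compare an environment's actual settings against the mandated baseline and halt the release on any drift:

    // Baseline the organization mandates (illustrative keys and values).
    val baseline = mapOf(
        "tls.min_version" to "1.2",
        "admin.port.public" to "false",
        "password.min_length" to "12"
    )

    // Returns one message per out-of-spec setting; empty means compliant.
    fun auditConfig(actual: Map<String, String>): List<String> =
        baseline.mapNotNull { (key, expected) ->
            val found = actual[key] ?: "missing"
            if (found != expected) "$key: expected '$expected', found '$found'" else null
        }

    fun main() {
        val actual = mapOf("tls.min_version" to "1.0", "password.min_length" to "12")
        val drift = auditConfig(actual)
        if (drift.isNotEmpty()) {
            drift.forEach(::println)
            kotlin.system.exitProcess(1) // fail the pipeline, halting the release
        }
    }

In a CI pipeline, the nonzero exit code is what stops the release so the team can remediate, as described above.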


Quantum entanglement breakthrough could boost encryption, secure communications


Generating photons at two micrometres had never been demonstrated before. A major challenge for the researchers was to get their hands on the appropriate technology to conduct their experiment. "You need detectors that are able to see single photons at two micrometres, and we had to develop the right technology for these measurements," says Clerici. "And on the other side, you also need a specific piece of technology to generate the photons." In partnership with technology manufacturer Covesion, Clerici and his team engineered a nonlinear crystal that was suitable for operating at two micrometres. Photons are generated when short pulses of light from a laser source pass through the crystal. In theory, the entangled photons generated at the new wavelength should be able to travel as far as the photons generated through existing methods and used for satellite communication. But the new experiment is still in its early stages, and Clerici said that the team hasn't yet identified how much information the new technology can communicate, or how quickly.


Google's MediaPipe Machine Learning Framework Web-Enabled with WebAssembly


The browser-enabled version of MediaPipe graphs is implemented by compiling the C++ source code to WebAssembly using Emscripten, and creating an API for all necessary communications back and forth between JavaScript and C++. Required demo assets (ML models and auxiliary text/data files) are packaged as individual binary data packages, to be loaded at runtime. To optimize for performance, MediaPipe’s browser version leverages the GPU for image operations whenever possible, and resorts to the lightest (yet accurate) available ML models. The XNNPack ML Inference Library is additionally used in connection with the TensorflowLite inference calculator (TfLiteInferenceCalculator), resulting in an estimated 2-3x speed gain in most applications. Google plans to improve MediaPipe’s browser version and give developers more control over template graphs and assets used in the MediaPipe model files. Developers are invited to follow the Google Developers Twitter account.



Quote for the day:


"Leadership is the other side of the coin of loneliness, and he who is a leader must always act alone. And acting alone, accept everything alone." -- Ferdinand Marcos


Daily Tech Digest - March 23, 2020

You Need to Know SQL Temporary Table


We have been warned to NOT write any business logic in databases using triggers, stored procedures, and so on. It doesn’t mean we don’t need to know database systems. Being competent in database systems could save us a lot of work. For example, managers or customers often send us an email or a short notice asking for some one-off reports. Then we need to quickly log into the database servers and generate reports with either a list of parameters or a CSV file from requesters. ... There are two types of temporary tables: local and global temporary tables. Both of them share similar behaviors, except that the global temporary tables are visible across sessions. Moreover, the two types of temporary tables have different naming rules: local temporary tables should have names that start with a hash symbol (#); while the names of global temporary tables should start with two hash symbols (##). All temporary tables are stored in System Databases -> tempdb -> Temporary Tables.
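
A quick T-SQL illustration of the two kinds (table and column names invented for the example):

    -- Local temp table: name starts with one hash, visible only to this
    -- session, dropped automatically when the session ends.
    CREATE TABLE #RegionSales (Region NVARCHAR(50), Total MONEY);
    INSERT INTO #RegionSales VALUES (N'West', 125000), (N'East', 98000);
    SELECT Region, Total FROM #RegionSales;

    -- Global temp table: two hashes, visible across sessions until the
    -- creating session ends and no other session still references it.
    CREATE TABLE ##DailyReport (ReportDate DATE, RowTotal INT);

    -- Both live in System Databases -> tempdb -> Temporary Tables.
    DROP TABLE #RegionSales;
    DROP TABLE ##DailyReport;

Because local temp tables vanish with the session, they suit exactly the kind of one-off report described above: stage the data, query it, and let the server clean up.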



Remote work tests corporate pandemic plans


IT leaders across the country are shifting gears from accommodating short-term remote work strategies for snowstorms, hurricanes and other natural disasters to helping workers plan for and remain productive in a longer-term remote work environment. Due to the duration of the pandemic, Miami-based ChenMed, an operator of 60 senior health centers in the eastern U.S., intends to offer the small number of its 2,500 users who don't have a laptop, such as front desk staff, the opportunity to take home their desktops so they can continue to answer patient calls and conduct other business. "Yes, it creates a lot more complexity in helping users set that up, but we want them to have a great experience versus trying to use an old computer at home," CIO Hernando Celada said. This strategy gives him confidence that the machines will be secure when the time comes for workers to be sent home, which will be at the first sign of community spread of the virus because ChenMed's patient population is the most vulnerable.


Private cloud reimagined as equal partner in multi-cloud world

Forrester's Gardner argues that repatriation is not a broad trend. "It's simply not true," he says. There may be some companies moving a specific application back to the private cloud for performance, regulatory or data gravity reasons, but repatriation is a relatively isolated phenomenon. The latest Gartner thinking on repatriation is in agreement with Gardner. "Contrary to market chatter that customers are abandoning the public cloud, consumption continues to grow as organizations leverage new capabilities to drive transformation. Certain workloads with low affinities to public cloud may be repatriated, largely because the migrations were not sufficiently thought through. But few organizations are wholly abandoning the public cloud at any technology layer," reads a 2019 Gartner report from analysts Brandon Medford, Sid Nag and Mike Dorosh. Warrilow says flatly, "Repatriation in net terms is not happening." He adds that there will always be a small number of workloads that go back to the private cloud as part of an organization's ongoing evaluation of the best landing spot for specific workloads.


What’s New in SQL Monitor 10?

SQL Monitor does the best job it can, out of the box, of setting up a useful core set of metrics and alerts, with sensible thresholds. However, the right alerts and the right thresholds are 100% dependent on your systems. A group or class of servers may all need the same alert types with the same thresholds, but these may well be different from those for other classes of server. Also, your group of VMware-based servers, for example, may need different thresholds than your bare-metal servers for the same set of memory-related alerts. Configuring all this in the GUI, server by server, can be time consuming, and it’s easy to introduce discrepancies. This alert configuration task, just like any other SQL Server management or maintenance task, should be automated. With the PowerShell API, you can now write PowerShell scripts to set up the alerts on a machine exactly in accordance with your requirements. You then use that as a model to copy all the settings to other machines, or to whole groups of machines.


Can APIs be copyrighted?

The law is very clear about copyright. If a programmer writes down some code, the programmer owns the copyright on the work. The programmer may choose to trade that copyright for a paycheck or donate it to an open source project, but the decision is entirely the programmer’s. An API may not be standalone code, but it’s still the hard work of a person. The programmers will make many creative decisions along the way about the best or most graceful way to share their computational bounty. ... APIs are purely functional and the copyright law doesn’t protect the merely functional expressions. If you say “yes” to a flight attendant offering you coffee, you’re not plagiarizing or violating the copyright of the ancient human who coined the word “yes.” You’re just replying in the only way you can. Imagine if some clever car manufacturer copyrighted the steering wheel and the location of the pedals. The car manufacturers have plenty of ways to get creative about fins and paint colors. Do they need to make it impossible to rent or borrow a car without a lesson on how to steer it? The law recognizes that there are good reasons not to allow copyright to control functional expressions.


From Zero to Hero: CISO Edition

With new attacks forming faster than the technologies to fight them, holding CISOs to an entirely unrealistic standard doesn’t actually serve anyone. The truth is that no matter how many technologies are deployed or how good the security posture is, 100% protection from cyberattacks is simply not possible. Perhaps senior leadership and boards of directors are finally starting to acknowledge this fact, or perhaps they're starting to realize that a successful response to an attack, along with actions by other parts of the organization, contribute to the ultimate scale and scope of the event. CISOs are uniquely capable of gauging cyber-risk and how to reduce it. Experienced CISOs understand the threats their companies face and know how to deploy the optimal mix of people, processes, and technologies, weighed against threats, to provide the best possible level of protection. Organizations that understand this are leading the charge in shifting the perception of the CISO from technical manager to strategic risk leader.


Most common cyberattacks we'll see in 2020


By convincingly impersonating legitimate brands, phishing emails can trick unsuspecting users into revealing account credentials, financial information, and other sensitive data. Spear phishing messages are especially crafty, as they target executives, IT staff, and other individuals who may have administrative or high-end privileges. Defending against phishing attacks requires both technology and awareness training. Businesses should adopt email filtering tools such as Proofpoint and the filtering functionality built into Office 365, said Thor Edens, director of Information Security at data analytics firm Babel Street. Business-focused mobile phishing attacks are likely to spread in 2020, according to Jon Oltsik, senior principal analyst for market intelligence firm Enterprise Strategy Group. As such, IT executives should analyze their mobile security as part of their overall strategy. "Spam filters with sandboxing and DNS filtering are also essential security layers because they keep malicious emails from entering the network, and protect the user if they fall for the phishing attempt and end up clicking on a malicious hyperlink," said Greg Miller, owner of IT service provider CMIT Solutions of Orange County.


Las Vegas shores up SecOps with multi-factor authentication


Las Vegas initially rolled out Okta in 2018 to improve the efficiency of its IT help desk. Sherwood estimated the access management system cut down on help desk calls relating to forgotten passwords and password resets by 25%. The help desk also no longer had to manually install new applications for users because of an internal web portal connected to Okta that automatically manages authorization and permissions for self-service downloads. That freed up help desk employees for more strategic SecOps work, which now includes the multi-factor authentication rollout. Another SecOps update slated for this year will add city employees' mobile devices to the Okta identity management system, and an Okta single sign-on service for Las Vegas citizens that use the city's web portal. Residents will get one login for all services under this plan, Sherwood said. "If they get a parking citation and they're used to paying their sewer bill, it's the same login, and they can pay them both through a shopping cart."


Coronavirus challenges capacity, but core networks are holding up

Increased use of conferencing apps may affect their availability for reasons other than network capacity. For example, according to ThousandEyes, users around the globe were unable to connect to their Zoom meetings for approximately 20 minutes on Friday due to failed DNS resolution. Others, too, are monitoring data traffic looking for warning signs of slowdowns. “Traffic towards video conferencing, streaming services and news, e-commerce websites has surged. We've seen growth in traffic from residential broadband networks, and a slowing of traffic from businesses and universities," wrote Louis Poinsignon, a network engineer with Cloudflare, in a blog about Internet traffic patterns. He noted that on March 13, when the US announced a state of emergency, Cloudflare’s US data centers served 20% more traffic than usual. Poinsignon noted that Internet Exchange Points, where Internet service providers and content providers can exchange data directly (rather than via a third party), have also seen spikes in traffic. For example, at Amsterdam (AMS-IX), London (LINX) and Frankfurt (DE-CIX), a 10-20% increase was seen around March 9.



With a large segment of the population confined to their homes and consuming bandwidth, the internet free-for-all we have enjoyed to date is all but done. Emergency legislation or an executive order needs to be enacted to limit video content streaming to 720p across all content services, such as Netflix, Hulu, Apple TV, Disney+, YouTube, and other providers. Traffic prioritization and shaping need to be put in place for core business applications during prime hours, including video conferencing for business and personal use. This would effectively be the opposite of net neutrality, as an emergency measure. Internet video streaming traffic should be prioritized for essential news providers, and the government should provide incentives for them to broadcast their content (and for home-bound citizens to consume it) over-the-air (OTA) so that additional bandwidth can be freed up. Remember the antenna and devices with built-in tuners? It may be an appropriate time to shift some programming back to the airwaves, and even bring back the DVR, so that programming can be transferred to devices during off-hours when networks aren't saturated.



Quote for the day:


"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi


Daily Tech Digest - July 18, 2019

CIOs must play a key role in ecosystem strategies

Digital technologies emphasize the need for a more agile IT strategy with technology investments that support the future needs of the business. IT organizations will need to be multi-speed, taking advantage of business opportunities such as boosting customer engagement via new digital channels or winning emerging markets customers—alongside their traditional role as providers of technology capabilities and solutions. Information technology is critical in satisfying the heightened need for data insights, as today’s executives seek accurate, real-time information that supports decision making, reduces risk, and helps drive improvements. Accenture Strategy research, Cornerstone of future growth: Ecosystems, shows that companies in the United States clearly see advantages in ecosystems, and almost half of those surveyed are actively seeking them. Accenture Strategy surveyed 1,252 business leaders from diverse industries across the world, including 649 in the United States, to better understand the degree to which companies are capturing ecosystem opportunities. Survey results indicated executives’ desire to lead through adaptation and adoption. 



Digital transformation in the construction industry: is an AI revolution on the way?
There is an appetite for change, with the construction industry looking at a range of technologies, on top of AI, that could help it in the future: virtual reality (28%), cloud computing (24%), software-defined networking (20%), blockchain (19%) and the Internet of Things (17%) are all seen as key to future development by those in larger organisations. According to Tech Nation’s 2018 report, technology is expanding 2.6 times faster than the rest of the UK economy, and yet the construction industry has been slow to implement digitalisation strategies that could bring increased efficiency and collaboration as well as reduced costs. The majority of the construction firms surveyed said they have either completed a digital transformation project or have one currently underway — over half noted improved efficiency (61%) and reduced operational costs (58%) as direct advantages.


Maintaining Security As You Invest in UCC

Today’s workforce expectations for usability are unprecedented. Your users expect all their software to be simple to use and to just work, no exceptions. The same goes for your customers – they don’t have the patience for a collaboration tool that isn’t immediately connected or intuitive. Don’t allow this expectation of ease-of-use to push you to security shortcuts. Users and administrators must be sensitive to the default settings of the web applications being used to host their online meetings, and ensure permissions are set with both user experience and security top of mind. Remind users to keep browsers up to date, including the latest security patches. Collaboration tools should never bypass operating system or browser security controls for the sake of simplicity to the end user. The risk is far greater than the reward. Meeting spaces should be safe spaces for open collaboration and discussion. But online meetings have opened up every internal conversation to external hackers in a way that in-office meetings never have.



Is It The Platform Or Is It The Ecosystem?

The key to ecosystems is understanding that they represent a whole new economy. Apple’s App Store succeeded in part because of the extensive advocacy for Apple at the launch of the iPhone. At the time, I worked inside Nokia, and we could barely get airtime for Nokia innovations in the face of all the content that encircled an incumbent in a powerful industry like computing. It was only after a year or so that Apple understood it was creating opportunity for ‘the little guy’. It stumbled upon success with apps, but its ecosystem was there long before Steve Jobs gave it the green light. Ecosystems thrive on information and content. They also thrive when they create multiple avenues for new businesses, as per the Airbnb example above. Like anything, they need strong branding, and that gives incumbents the advantage. They thrive when they breach the walls of an established industry, allowing entrepreneurial passion to pour in. In healthcare, GE tried to establish an ecosystem for breast cancer diagnostics, but in reality it only let in established healthcare firms.


5 Important Ways Jobs Will Change In The 4th Industrial Revolution

Rather than succumb to the doomsday predictions that “robots will take over all the jobs,” a more optimistic outlook is one where humans get the opportunity to do work that demands their creativity, imagination, social and emotional intelligence, and passion. Individuals will need to act and engage in lifelong learning, so they are adaptable when the changes happen. The lifespan for any given skill set is shrinking, so it will be imperative for individuals to continue to invest in acquiring new skills. The shift to lifelong learning needs to happen now because the changes are already happening. In addition, employees will need to shape their own career path. Gone are the days when a career trajectory is outlined at one company with predictable climbs up the corporate ladder. Therefore, employees should pursue a diverse set of work experiences and take the initiative to shape their own career paths. Individuals will need to step into the opportunity that pursuing your passion provides rather than shrink back to what had resulted in success in the past. 


Network capacity planning in the age of unpredictable workloads


To plan realistic capacity requirements, network engineers formally dive into the complex math of the Erlang B formula; if you are inclined to learn it, check out the older book James Martin's Systems Analysis for Data Transmission. However, there are also easier rules of thumb. As a connection congests, the risk of delay and packet loss increases in a nonlinear fashion. This tenet underpins network capacity planning fundamentals. Problems ramp up slowly until the network reaches about 50% utilization; issues rise rapidly after that threshold. At 70% utilization, delay doubles, for example. Keep the connection, or gateway, utilization around the 50% level to avoid congestion during peaks. Unexpected traffic peaks often occur when a single transaction launches a complex multicomponent workflow, and especially when traffic changes because of failover or scaling. The most significant network capacity planning decision is how to size the data center interconnect (DCI) network. It is the hub of all workflows, into and out of the cloud and to and from workers and internet users. The DCI network must never become congested.
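
For readers who do want the formal math, Erlang B has a compact iterative form: B(E, 0) = 1, then B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)), where E is the offered load in erlangs and m the number of channels. A small Kotlin sketch (the loads and channel count are illustrative):

    // Probability that an arriving request is blocked when `servers`
    // parallel channels carry an offered load of `erlangs`.
    fun erlangB(erlangs: Double, servers: Int): Double {
        var b = 1.0 // B(E, 0)
        for (m in 1..servers) {
            b = erlangs * b / (m + erlangs * b)
        }
        return b
    }

    fun main() {
        // Blocking rises nonlinearly as offered load approaches the
        // 50-channel capacity, the same shape as the 50%/70% rule of thumb.
        for (load in listOf(40.0, 45.0, 48.0, 50.0)) {
            println("offered %.0f erlangs on 50 channels -> blocking %.4f"
                .format(load, erlangB(load, 50)))
        }
    }

The recurrence avoids the factorials in the closed-form Erlang B expression, so it stays numerically stable even for large channel counts.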


The lost art of ethical decision making

Ethics need not be wildly complex, nor must you exemplify saintly behaviors or be infallible in your decision making. As you lead your teams, try to apply these guidelines. Implement the "newspaper test": When faced with a complex decision, especially one in which you're faced with a variety of bad options, imagine that an account of your decision and the behaviors and process that got you there were published in a front-page newspaper story. Would you be a sympathetic character who weighed the various options, treated the parties fairly, and respected your obligations as a leader, even though the outcome wasn't all rainbows and unicorns, or would you be portrayed as slyly manipulating circumstances for your benefit? Perhaps one of the most challenging concepts is that of "fairness," particularly around the human tendency to conflate fairness of a process with fairness of outcome. The former should be the goal of your own ethical standards, as that provides all parties with similar consideration, information, and standards. Where trouble arises is when you attempt to create a "fair" outcome that causes you to treat various parties and factors differently to justify an end result.


Protecting the edge of IoT as adoption rates for the technology grow

You can see the problem: with the rapid increase in edge devices, the risk of a data breach only multiplies for enterprises. Last year, for example, there were 1,244 data breaches, exposing 446.5 million records. This not only leads to significant business obstacles, but breaches also come at a high price — Ponemon Institute estimates the average cost of a data breach to exceed $3.5 million. This broader array of environments, coupled with the prevalence of data breaches, makes it critical for enterprises to secure their computing infrastructure. “With the growth of IoT and the rising cost of data breaches, enterprises need a secure computing infrastructure more than ever,” confirms Damon Kachur, vice president, IoT Solutions, Sectigo. To meet this demand, Sectigo — the commercial Certificate Authority (CA) — has entered into a secure edge computing technology pact with NetObjex — an intelligent automation platform for tracking, tracing and monitoring digital assets using AI, blockchain, and IoT.


Companies with zero-trust network security move toward biometric authentication

Composite image of binary code and biometric fingerprint scanning authorization.
"Fundamentally we've all figured out that you can't trust everything just because it's on the inside of your firewall; just because it's on your network," says Wendy Nather, director of Advisory CISOs at Duo Security, a multi-factor authentication (MFA) solutions provider that is now part of Cisco Systems. "So, if you agree with that, the question becomes: What are we trusting today that we really shouldn't be trusting and what should we be verifying even more than we have been? The answer is really that you have to verify users more carefully than you have before, you have to verify their devices and you need to do it based on the sensitivity of what they're getting access to, and you also need to do it frequently, not just once when you let them inside your firewall." "You should be checking early and often and if you're checking at every access request. you're more likely to catch things that you didn't know before," Nather says.


Lateral phishing used to attack organisations on a global scale


Out of the organisations targeted by lateral phishing, more than 60% had multiple compromised accounts. Some had dozens of compromised accounts that sent lateral phishing attacks to additional employee accounts and users at other organisations. In total, researchers identified 154 hijacked accounts that collectively sent hundreds of lateral phishing emails to more than 100,000 unique recipients. A recent benchmarking report by security awareness training firm KnowBe4 puts the average phish-prone percentage across all industries and sizes of organisations at 29.6% – up 2.6% since 2018. Large organisations in the hospitality industry have the highest phish-prone percentage (PPP) of 48%, and are therefore most likely to fall victim to a phishing attack, while the transportation industry is at the lowest risk, with large organisations in the sector scoring a PPP of just 16%. Because lateral phishing exploits the implicit trust placed in the legitimate accounts it compromises, these attacks ultimately cause mounting reputational harm for the initial victim organisation, the researchers said.



Quote for the day:


"It is not enough to have the right ingredients, you must bake the cake." -- Tim Fargo


Daily Tech Digest - August 16, 2018

U.S. Treasury: Regulators should back off FinTech, allow innovation

"Banks are very adept at innovating and experimenting with new products and services. The catch is the implementation of those products and services to ensure data privacy and security; it may take months or longer to prove data privacy and security efficacy," Steven D’Alfonso, a research director with IDC Financial Insight said. The federal agency, however, specifically identified a need to remove legal and regulatory uncertainties that hold back financial services companies and data aggregators from establishing data-sharing agreements that would effectively move firms away from screen-scraping customer data to more secure and efficient methods of data access. Today, many third-party data aggregators unable to access consumer data via APIs resort to the more arduous method of asking consumers to provide account login credentials (usernames and passwords) in order to use fintech apps. "Consumers may or may not appreciate that they are providing their credentials to a third-party, and not logging in directly to their financial services company," the report noted.


Are microservices about to revolutionize the Internet of Things?

Individual edge IoT devices typically need to be extremely power efficient and resource efficient, with the smallest possible memory footprint and consuming minimal CPU cycles. Microservices promise to help make that possible. “Microservices in an edge IoT environment can also be reused by multiple applications that are running in a virtualized edge,” Ouissal explained. “Video surveillance systems and a facial recognition system running at the edge could both use the microservices on a video camera, for example.” Microservices also bring distinct security advantages to IoT and edge computing, Ouissal claimed. Microservices can be designed to minimize their attack surface by running only specific functions and running them only when needed, so fewer unused functions remain “live” and therefore attackable. Microservices can also provide a higher level of isolation for edge and IoT applications: In the camera example described above, hacking the video streaming microservice on one app would not affect other streaming services, the app, or any other system.
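As a rough illustration of that minimal-attack-surface idea, here is a sketch of a single-purpose edge microservice using only Python's standard library: it exposes exactly one function and returns 404 for everything else. The endpoint name and payload are our invention, standing in for the video-camera example:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VideoFrameService(BaseHTTPRequestHandler):
        # One endpoint, one job: nothing else is reachable, which keeps the
        # attack surface minimal. Endpoint and payload are illustrative only.
        def do_GET(self):
            if self.path == "/frame":
                self.send_response(200)
                self.send_header("Content-Type", "application/octet-stream")
                self.end_headers()
                self.wfile.write(b"\x00" * 64)  # placeholder for a video frame
            else:
                self.send_error(404)            # every other function stays dark

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), VideoFrameService).serve_forever()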


Companies may be fooling themselves that they are GDPR compliant

A closer look at the steps taken by many of these companies reveals a GDPR strategy that is only skin deep and fails to identify, monitor or delete all of the Personally Identifiable Information (PII) data they have stored. Such a shallow approach presents significant risks, as these businesses may be oblivious to much of the PII data that they hold and would have difficulty finding and deleting it if requested to do so. They would also be unable to provide the regulatory authorities with GDPR-mandated information about data implicated in a breach within 72 hours of its discovery—another GDPR requirement. To address these risks, companies need a holistic strategy to manage their data—one that automates the process of profiling, indexing, discovering, monitoring, moving and deleting all of their data as necessary, even if it’s unstructured or perceived to be low-risk. This will significantly reduce their GDPR and other regulatory compliance risks, while simultaneously allowing them to make greater use of the data in ways that create business value.
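The discovery step of such a strategy can be automated with pattern matching over unstructured data. A minimal, illustrative sketch of a PII scanner (the two patterns below cover only email addresses and phone numbers, and are nowhere near exhaustive):

    import re

    # Illustrative PII patterns; a real scanner would cover far more
    # identifiers (names, addresses, national IDs, payment data, ...).
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    }

    def find_pii(text: str) -> dict[str, list[str]]:
        """Index every PII match so it can later be monitored or deleted."""
        return {kind: pat.findall(text) for kind, pat in PII_PATTERNS.items()}

    sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
    print(find_pii(sample))
    # {'email': ['jane.doe@example.com'], 'phone': ['+44 20 7946 0958']}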


Banks lead in digital era fraud detection


Banks have recognised the need to have an omni-channel view of the different interactions and to do their fraud risk assessments across the various channels, he said, because not only do cyber fraudsters work across multiple channels, but so do ordinary consumers, starting something on a laptop, continuing it on the phone and perhaps completing it through a virtual assistant while travelling in a car. “This is true for all consumer-facing businesses that have different channels through which they interact with consumers, and they should follow the banks’ lead and adopt an omni-channel approach to doing their risk profiling, and gain from the visibility you have in each of the channels,” said Cohen. “In enterprise security, we were talking about breaking down channels years ago, and now we are starting to talk about it in the context of fraud, so that fraud assessments are carried out in the light of what is going on across all the available channels of interaction, especially as interactions increasingly take place through third parties.”


Over 9 out of 10 people are ready to take orders from robots

Perhaps organisations are not doing enough to prepare the workforce for AI. Almost all (90 percent) HR leaders and over half of employees (51 percent) reported that they are concerned they will not be able to adjust to the rapid adoption of AI as part of their job, and do not feel empowered to address an emerging AI skills gap in their organisation. Almost three quarters (72 percent) of HR leaders noted that their organisation does not provide any form of AI training programme. Other major barriers to AI adoption in the enterprise are cost (74 percent), failure of technology (69 percent), and security risks (56 percent). But a failure to adopt AI will have negative consequences too: almost four out of five (79 percent) HR leaders and 60 percent of employees believe that it will impact their careers, colleagues, and overall organisation. Emily He, SVP of Human Capital Management Cloud Business Group at Oracle, said: "To help employees embrace AI, organizations should partner with their HR leaders to address the skill gap and focus their IT strategy on embedding simple and powerful AI innovations into existing business processes."


Network capacity planning in the age of unpredictable workloads

Data center interconnect
When an application or a component of a distributed application moves or scales up, it needs a new IP address and the capacity to route traffic to that new address. Every decision around workload portability and elasticity generates traffic on the data center network and the cloud gateway(s) involved. A workload's address determines how the workflows that pass through it connect, which defines the traffic pathways and where to focus network capacity plans. To plan realistic capacity requirements, formally trained network engineers dive into the complex math of the Erlang B formula; if you are inclined to learn it, check out James Martin's older book, Systems Analysis for Data Transmission. However, there are also easier rules of thumb. As a connection congests, the risk of delay and packet loss rises in a nonlinear fashion, and this tenet underpins the fundamentals of network capacity planning. Problems ramp up slowly until the network reaches about 50% utilization; issues rise rapidly after that threshold.
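That nonlinear ramp-up can be made concrete with a simple M/M/1 queueing approximation, in which mean delay grows in proportion to 1/(1 - utilization); the model choice is our assumption, used only to illustrate the rule of thumb:

    # M/M/1 approximation: mean delay scales as 1/(1 - rho), where rho is
    # link utilization. In this model, delay at 70% utilization is exactly
    # double the delay at 40%, and it explodes as rho approaches 1.
    def relative_delay(rho: float) -> float:
        if not 0 <= rho < 1:
            raise ValueError("utilization must be in [0, 1)")
        return 1.0 / (1.0 - rho)

    for rho in (0.3, 0.5, 0.7, 0.9):
        print(f"utilization {rho:.0%}: delay factor {relative_delay(rho):.2f}x")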


Data recovery do's and don'ts for IT teams

data-recovery
data-recovery
Many, but not all, modern backup applications perform bit-by-bit checks to ensure the data being read from primary storage does in fact match the data being written to backup storage. Add that to your backup software shopping list, Verma advised. "The other thing I've noticed in the last couple of years is the industry heading away from this notion of always having to do a full restore," Verma said. For example, you could restore a single virtual machine or an individual database table, rather than a whole server or the entire database itself. That's easy to forget when under pressure from angry users who want their data back immediately. "The days of doing a full recovery are gone, if you will. It's get me to what I need as quickly as possible," Verma said. "I would say that's the state of the industry." A virtual machine can actually be booted directly from its backup disk, which is useful for checking whether the necessary data is there, but not very realistic for a large-scale recovery, Verma added.
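The bit-by-bit check Verma describes amounts to confirming that what was written to backup storage matches what was read from primary storage. A minimal sketch using content hashes (both file paths are hypothetical placeholders):

    import hashlib

    def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
        """Hash a file in chunks so large backups don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    # Verify the backup copy matches the primary before trusting a restore.
    # Both paths are hypothetical placeholders.
    if file_digest("/primary/db.vmdk") == file_digest("/backup/db.vmdk"):
        print("backup verified: contents match")
    else:
        print("MISMATCH: backup is not a faithful copy")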


Web Application Security Thoughts

Web application security is a branch of Information Security that deals specifically with the security of websites, web applications and web services. At a high level, web application security draws on the principles of application security but applies them specifically to internet and web systems. A web application should follow a systematic process for security testing. ... The company or the project owner has to decide which remediation will be the most effective solution for the application, because each application has a different purpose and different user groups. For a financial application, you have to be more careful about transactions and the money stored in the database, so both application and database security are important. If the application handles card data, it must meet the PCI-recommended guidelines to mitigate or prevent fraud; in this case, the OWASP Top 10 vulnerabilities should be addressed properly. Protecting an application nowadays means relying on more than the application itself: you also need a PCI-recommended next-generation firewall or WAF, which performs several protective functions for a web application.
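As a concrete example of addressing one OWASP Top 10 item, injection, parameterized queries keep user input out of the SQL text entirely. A minimal sketch using Python's built-in sqlite3 module (the schema and data are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (card_holder TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

    user_input = "alice' OR '1'='1"  # a classic injection attempt

    # Vulnerable (never do this): string concatenation puts the attacker's
    # text into the SQL statement itself.
    #   conn.execute("SELECT * FROM accounts WHERE card_holder = '" + user_input + "'")

    # Safe: the ? placeholder binds user input as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM accounts WHERE card_holder = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection string matches no real card holder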


DDoS attackers increasingly strike outside of normal business hours

DDoS attacks outside business hours
While attack volumes increased, researchers recorded a 36% decrease in the overall number of attacks. There was a total of 9,325 attacks during the quarter: an average of 102 attacks per day. While the number of attacks decreased overall – possibly as a result of the DDoS-as-a-service website Webstresser being shut down following an international police operation – both the scale and complexity of the attacks increased. The LSOC registered a 50% increase in hyper-scale attacks (80 Gbps+). The most complex attacks seen used 13 vectors in total. Link11’s Q2 DDoS Report revealed that threat actors targeted organisations most frequently between 4pm CET and midnight, Saturday through Monday, with businesses in the e-commerce, gaming, IT hosting, finance, and entertainment/media sectors being the most affected. The report reveals that high-volume attacks were ramped up via Memcached reflection, SSDP reflection and CLDAP, with the peak attack bandwidth recorded at 156 Gbps.


AIOps platforms delve deeper into root cause analysis


The differences lie in the AIOps platforms' deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream's approach. "Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details," Gohring said. "FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance." FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said. "It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure," she said.



Quote for the day:


"It is a terrible thing to look over your shoulder when you are trying to lead and find no one there." -- Franklin D. Roosevelt