
Daily Tech Digest - September 24, 2021

Chef Shifts to Policy as Code, Debuts SaaS Offering

As for ease of use, Chef Enterprise Automation Stack (EAS) will also be available in both AWS and Azure marketplaces. The company has begun a Chef Managed Services program, and Chef EAS is also now available in a beta SaaS offering. All of these together, said Nanjundappa, will make Chef EAS “easy to access and adopt, which will help reduce overall time to value.” Looking forward, Nanjundappa said that the focus will include features like cloud security posture management (CSPM) and Kubernetes security. “We are seeing more and more compute workloads being migrated towards containers and Kubernetes. We currently offer Chef InSpec + content for CIS profiles for K8s and Docker that help secure containers and Kubernetes,” wrote Nanjundappa. “But we will be adding additional abilities to maintain security posture in containers and Kubernetes platforms in the coming years.” More specifically, upcoming Kubernetes features will offer visibility into containers and the Kubernetes environment, scanning for common misconfigurations, vulnerability management, and runtime security.


Private vs. Public Blockchains For Enterprise Business Solutions

Not all blockchains are created equal. Businesses have always required a reasonable degree of privacy as well as control over their networks. Since the popularisation of the internet and the advance of eCommerce, it has been essential that companies protect their systems from outside attackers, both to preserve their workflow and to safeguard any sensitive information they might be storing. Hence, as blockchain technology becomes integrated into the modern digital workplace, it is only logical that private networks are often seen as preferable for many organizations. This is no big surprise, especially given that some of the main selling points of blockchain include a completely transparent ledger containing all data as well as the ability to move value around, and it's clear why a business wouldn't want just anyone to be able to access its internal network. This way, the company gets many of the benefits of the novel tech but can remain opaque to most of the world. It is also true that private blockchains are typically much more efficient than public ones.


10 top API security testing tools

Many organizations likely don’t know how many APIs they are using, what tasks they are performing, or how high a permission level they hold. Then there is the question of whether those APIs contain any vulnerabilities. Industry and private groups have come up with API testing tools and platforms to help answer those questions. Some testing tools are designed to perform a single function, like mapping why specific Docker APIs are improperly configured. Others take a more holistic approach to an entire network, searching for APIs and then providing information about what they do and why they might be vulnerable or over-permissioned. Several well-known commercial API testing platforms are available as well as a large pool of free or low-cost open-source tools. The commercial tools generally have more support options and may be able to be deployed remotely though the cloud or even as a service. Some open-source tools may be just as good and have the backing of the community of users who created them. Which one you select depends on your needs, the security expertise of your IT teams, and budget.


Implementing risk quantification into an existing GRC program

How do risk professionals quantify risk? Using dollars and cents. Taking the information gathered in the Open FAIR model simulations, risk quantification further breaks down primary and secondary losses into six different types for each loss, allowing the organization to determine how best to categorize them. CISOs and other risk professionals can consider data points from the market, their data and additional available information. They can classify each type of data they’re inputting as high or low confidence. Primary loss equals anything that’s a direct loss to the company due to a specific event. Secondary loss includes something which may or may not occur, like reputational damage or potential lost revenue. Risk quantification also enables risk professionals to communicate risk to leaders and other stakeholders in a shared language everyone understands: dollars and cents. Quantifying risk in financial terms enables organizations to assess where their biggest loss exposures may be, conduct cost-benefit analyses for those initiatives designed to improve risk activities, and prioritize those risk mitigation activities based on their impact to the business.
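The simulation approach described above can be sketched in a few lines. This is a toy Monte Carlo model, not the Open FAIR implementation itself; the event frequency, loss ranges, and secondary-loss probability below are illustrative assumptions.

```python
import random

def simulate_annual_loss(n_trials, events_per_year, primary_range,
                         secondary_prob, secondary_range, seed=0):
    """Toy FAIR-style simulation of annual loss exposure in dollars.

    primary_range: (low, high) direct loss per event -- always incurred.
    secondary_prob: chance a secondary loss (e.g. reputational damage)
                    also occurs for a given event.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        total = 0.0
        for _ in range(events_per_year):
            total += rng.uniform(*primary_range)      # primary loss: certain
            if rng.random() < secondary_prob:         # secondary: may or may not occur
                total += rng.uniform(*secondary_range)
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / n_trials,
        "p95": losses[int(0.95 * n_trials)],  # 95th-percentile annual loss
    }

result = simulate_annual_loss(
    n_trials=10_000, events_per_year=2,
    primary_range=(50_000, 250_000),
    secondary_prob=0.3, secondary_range=(100_000, 500_000),
)
```

A summary like this, expressed in dollars rather than a red/amber/green rating, is exactly the shared language the passage describes: leadership can compare the 95th-percentile loss against the cost of a proposed mitigation.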


The Architecture of a Web 3.0 application

Unlike Web 2.0 applications like Medium, Web 3.0 eliminates the middle man. There’s no centralized database that stores the application state, and there’s no centralized web server where the backend logic resides. Instead, you can leverage blockchain to build apps on a decentralized state machine that’s maintained by anonymous nodes on the internet. By “state machine,” I mean a machine that maintains some given program state and future states allowed on that machine. Blockchains are state machines that are instantiated with some genesis state and have very strict rules (i.e., consensus) that define how that state can transition. Better yet, no single entity controls this decentralized state machine — it is collectively maintained by everyone in the network. And what about a backend server? Instead of how Medium’s backend was controlled, in Web 3.0 you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine. This means that every person who wants to build a blockchain application deploys their code on this shared state machine.
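The "state machine with strict transition rules" idea can be illustrated with a deliberately tiny sketch. The class and method names below are invented for illustration and do not correspond to any real chain's API; real consensus involves many nodes agreeing on each transition, which is elided here.

```python
class ToyChainState:
    """A minimal state machine: a genesis state plus strict rules that
    define which transitions are valid -- invalid ones are rejected."""

    def __init__(self):
        self.balances = {"genesis": 1_000}  # the genesis state

    def transfer(self, sender, recipient, amount):
        # Consensus-style rule: a transition is valid only if the sender
        # actually holds the amount being moved.
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("invalid state transition")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

chain = ToyChainState()
chain.transfer("genesis", "alice", 250)
```

A smart contract generalizes `transfer` above: arbitrary application logic deployed onto the shared machine, with every node enforcing the same transition rules.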


A Major Advance in Computing Solves a Complex Math Problem 1 Million Times Faster

That's an exciting development when it comes to tackling the most complex computational challenges, from predicting the way the weather is going to turn, to modeling the flow of fluids through a particular space. Such problems are what this type of resource-intensive computing was developed to take on; now, the latest innovations are going to make it even more useful. The team behind this new study is calling it the next generation of reservoir computing. "We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do," says physicist Daniel Gauthier, from The Ohio State University. "And reservoir computing was already a significant improvement on what was previously possible." Reservoir computing builds on the idea of neural networks – machine learning systems based on the way living brains function – that are trained to spot patterns in a vast amount of data.
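To make the idea concrete, here is a sketch of a classic echo state network, the baseline form of reservoir computing the article says was improved upon (not the team's next-generation variant). Only the final linear readout is trained; the recurrent "reservoir" weights stay random and fixed, which is what makes training cheap. All sizes and scalings below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)

n_res = 100                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, n_res)          # fixed, untrained input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # fixed, untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

# Drive the reservoir with the signal and record its states.
states = np.zeros((len(signal), n_res))
x = np.zeros(n_res)
for i, u in enumerate(signal):
    x = np.tanh(W_in * u + W @ x)
    states[i] = x

# Train ONLY the linear readout (ridge regression) to predict the next value.
washout = 100                                 # discard initial transient
X, y = states[washout:-1], signal[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
```

Because training reduces to one linear solve instead of backpropagation through time, the computational savings the passage mentions follow directly from the architecture.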


Enterprise data management: the rise of AI-powered machine vision

The process of training machine learning algorithms is dramatically hindered for firms acquiring and centralising petabytes of unstructured data, whether video, picture, or sensor data. The AI development pipeline and production model tweaking are both delayed as a result of this centralised data processing method. In an industrial setting, this could result in product faults being overlooked, causing considerable financial loss or even putting lives in peril. Recently, distributed, decentralised architectures have become the preferred choice among businesses, with most data kept and processed at the edge to overcome delay and latency challenges and to address issues associated with data processing speeds. Deployment of edge analytics and federated machine learning technologies is bringing notable benefits while tackling the inherent security and privacy deficiencies of centralised systems. Take, for example, a large-scale surveillance network that continuously records video. Training an ML model to differentiate between certain items effectively requires the model to assess footage in which something new is observed, rather than hours of film of an empty building or street.
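The federated learning approach mentioned above can be sketched with a toy federated-averaging loop. This is an illustrative stand-in, not any vendor's implementation: each "edge node" trains a one-parameter linear model on its own local data, and only the model weights, never the raw data, travel back for averaging.

```python
import random

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a local (edge) dataset for the
    1-D linear model y = w * x -- a toy stand-in for on-device training."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(global_w, edge_datasets, rounds=100):
    """FedAvg sketch: nodes train locally, the server averages weights.
    Raw data never leaves the edge."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in edge_datasets]
        global_w = sum(local_ws) / len(local_ws)
    return global_w

def make_data(rng, n):
    # Synthetic edge data drawn from the true relationship y = 3x.
    data = []
    for _ in range(n):
        x = rng.uniform(-1, 1)
        data.append((x, 3 * x))
    return data

rng = random.Random(0)
datasets = [make_data(rng, 20) for _ in range(3)]
w = federated_average(0.0, datasets)   # converges toward the true weight 3.0
```

The privacy property the passage highlights falls out of the structure: the server sees only three floats per round, not twenty data points per node.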


The evolution of DRaaS

In the days in which DRaaS was born, it was not unusual for companies to maintain duplicate sets of hardware in an off-site location. Yes, they could replicate the data from their production site to the off-site location, but the expense of procuring and maintaining the secondary site was prohibitive. This led many to use the secondary location for old and retired hardware, or even to use less powerful computer systems and less efficient storage to save money. DRaaS is essentially DR delivered as a service. Expert third-party providers delivered tools, services, or both to enable organizations to replicate their workloads to data centers managed by those providers. This cloud-based model allowed for greater agility than previous iterations of DR could easily provide, empowering businesses to run in a geographically different location as close to normal as possible while the original site was made ready for operations again. And technology improvements over the course of the 2010s only made the failover and failback process more seamless and granular.


JLL CIO: Hybrid Work, AI, and a Data and Tech Revolution

Offices typically offer multiple services, Wagoner explains. For instance, someone puts the paper in the printers. Someone helps employees with laptop problems. Someone runs the on-site cafeteria. Someone maintains the temperature and air quality of the office. As an employee, if there’s an issue, you need to go to a different group for each one of these different issues. However, JLL’s vision is to remove that friction and collect all those services into a single interface experience app for employees. “With the experience app, we eliminate you having to know that you need to go to office services for one thing and then remember the URL for the IT help desk for another thing,” Wagoner says. “We don’t even necessarily replace any of the existing technology. We just give the end user a much better, easier experience to get to what they need.” This experience app is called “Jet,” and it also can inform workers of rules for particular buildings during the pandemic. For instance, if you book a desk in a building or as you approach a building it might tell you if that building has a vaccine requirement or a masking requirement.


Intel: Under attack, fighting back on many fronts

Each processor architecture has strengths and weaknesses, and each is better suited to specific use cases. Intel’s XPU project, announced last year, seeks to offer a unified programming model for all types of processor architectures and match every application to its optimal architecture. XPU means you can have x86, FPGA, AI and machine-learning processors, and GPUs all mixed into your network, and the app is compiled to the processor best suited for the job. That is done through the oneAPI project, which goes hand-in-hand with XPU. XPU is the silicon part, while oneAPI is the software that ties it all together. oneAPI is a heterogeneous programming model with code written in common languages such as C, C++, Fortran, and Python, and standards such as MPI and OpenMP. The oneAPI Base Toolkit includes compilers, performance libraries, and analysis and debug tools for general-purpose computing, HPC, and AI. It also provides a compatibility tool that aids in migrating code written in Nvidia’s CUDA to Data Parallel C++ (DPC++), the language of Intel’s GPUs.



Quote for the day:

"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden

Daily Tech Digest - January 16, 2020

How to get started with CI/CD

Continuous integration and continuous delivery require continuous testing, because the goal is to deliver high quality and secure applications and code to end users. Continuous testing is often deployed as a set of automated regression, performance, and other tests that are executed within the pipeline. CI and CD together (CI/CD) encompass a culture, a set of operating principles, and a collection of practices that accelerate the software development process. The implementation is also known as the CI/CD pipeline and is considered one of the best practices for devops teams. Industry experts say more organizations are implementing CI/CD as they look to enhance the design, development, and delivery of software applications to be used internally or by customers. “We’re definitely seeing a rise in the use of CI/CD,” says Sean Kenefick, vice president and analyst at research firm Gartner. “I personally get questions about continuous development, testing, and release all of the time.”
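The pipeline idea above can be reduced to a small sketch: stages run in order, and any failing stage (most importantly the automated tests) halts delivery before changes reach users. The stage names and functions are illustrative, not a real CI system's API.

```python
def run_pipeline(stages):
    """Minimal CI/CD pipeline sketch: run each stage in order; a failure
    stops the pipeline, so only fully tested changes are delivered."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, name   # failed stage halts the pipeline
        completed.append(name)
    return completed, None

stages = [
    ("build", lambda: True),              # compile / package the application
    ("test", lambda: 1 + 1 == 2),         # automated regression tests as the gate
    ("deploy", lambda: True),             # release only reached if tests pass
]
done, failed = run_pipeline(stages)
```

Real CI/CD systems express the same ordering declaratively in a pipeline configuration file, but the control flow is this simple gate-by-gate progression.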



Beware of this sneaky phishing technique now being used in more attacks


Cyber criminals are leaning hard on this attack technique as a means of compromising businesses, according to new research from Barracuda Networks. Analysis of 500,000 emails showed that conversation hijacking rose by over 400% between July and November last year. While conversation-hijacking attacks are still relatively rare, their personalised nature means they're difficult to detect, effective, and potentially very costly to organisations that fall victim to campaigns. For cyber criminals conducting conversation-hijacking attacks, the effort involved is much greater than simply spamming out phishing emails in the hope that a target clicks, but a successful attack can potentially be highly rewarding. In most cases, the attackers won't directly use the compromised account to send the malicious phishing message, because the user could notice that their outbox contains an email that they didn't send. Instead, conversation hijackers attempt to impersonate domains, using techniques like typo-squatting: registering a URL that is the same as the target company's, save for one or two slightly altered characters.
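The "one or two altered characters" pattern is exactly what edit distance measures, so a basic defensive check can be sketched like this. The function names and the two-edit threshold are illustrative choices, not any product's detection logic.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(sender_domain, legit_domain, max_edits=2):
    """Flag a domain within a couple of edits of the real one -- but not
    identical to it -- matching the pattern described in the article."""
    d = edit_distance(sender_domain, legit_domain)
    return 0 < d <= max_edits

flags = [looks_like_typosquat(d, "example.com")
         for d in ["example.com",           # the real domain: not flagged
                   "examp1e.com",           # l -> 1: one substitution
                   "exarnple.com",          # m -> rn: two edits
                   "totally-different.org"]]
```

In practice, mail-filtering products combine checks like this with homoglyph handling and sender-history analysis, but the core signal is the same tiny edit distance.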


11 Golden Rules For Android App Development


One of the golden rules of Android application development is a responsive user interface. It draws users into highly intuitive apps that enhance their experience and cater to their requirements. It is built by setting the viewport correctly and fixing the width so that everything on the screen can adjust to the screen size. Moreover, additional elements such as images, videos, or frames should be organized so that they fit well on all screen sizes. ... Prototypes can be the right choice for showcasing the power of different technologies. In the world of digitalization, nobody wants to read a long write-up but will happily sit through a digital presentation. After you identify the approach, build the prototype with basic functionality and present it to potential buyers so that they can understand its benefits. The prototype helps attract potential customers because they can use a live project and better understand the scope of the work.


Introduction to Gaps and Islands Analysis

One of the most significant challenges we face when analyzing data is pattern recognition. We seek to find ways in which our data deviates from the norm or conforms to a given norm. The goal is to identify tools that can be used to predict future behavior and make sense out of large volumes of data. Understanding boundaries and where a pattern begins or ends allows us to draw meaningful conclusions regarding our data. In terms of data, boundaries are more often seen as gaps or islands within any data set. Being able to efficiently locate gaps and islands enables us to use this data to gain meaningful insight into a system. We can identify winning and losing streaks, measure the strength of a system over time, find missing or duplicate data, and a variety of other interesting metrics. Within a data set, an island of data is any ordered sequence where each row is in close proximity to the rows around it. For some data types and analysis, “close proximity” will mean consecutive.
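Gaps-and-islands analysis is usually written in SQL, but the core trick translates directly: in a sorted run of consecutive integers, every row shares the same value of "value minus its position" (the Python analogue of `value - ROW_NUMBER()`). A minimal sketch:

```python
from itertools import groupby

def find_islands(values):
    """Group a sorted list of integers into islands of consecutive values.
    Consecutive values share the same (value - index) key, so groupby
    splits the sequence exactly at each gap."""
    islands = []
    for _, group in groupby(enumerate(values), key=lambda p: p[1] - p[0]):
        members = [v for _, v in group]
        islands.append((members[0], members[-1]))   # (island start, island end)
    return islands

def find_gaps(values):
    """Gaps are the ranges between the end of one island and the start of the next."""
    islands = find_islands(values)
    return [(end + 1, start - 1)
            for (_, end), (start, _) in zip(islands, islands[1:])]

data = [1, 2, 3, 7, 8, 12]
islands = find_islands(data)
gaps = find_gaps(data)
```

For `[1, 2, 3, 7, 8, 12]` this yields islands `(1, 3)`, `(7, 8)`, `(12, 12)` and gaps `(4, 6)`, `(9, 11)`: the missing-data ranges the passage describes. For non-integer data where "close proximity" is a tolerance rather than strict consecutiveness, the key function would compare distances between neighbours instead.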


The Flutter Architecture


The Flutter SDK allows you to build Android, iOS, web, and desktop apps from a single codebase. This is done using platform-specific features as well as media queries, and it enables developers to ship applications faster. Flutter also offers close-to-instant feedback with the hot reload feature, enabling you to iterate quickly on your application. In this piece, we’ll cover the fundamental concepts you need in order to start working with Flutter. Flutter’s core technologies are Dart, a programming language developed by Google, and Skia, a 2D graphics rendering library. The language has been optimized for building user interfaces, which makes it a good fit for the Flutter framework. The language is fairly easy to pick up, especially if you have a background in JavaScript and object-oriented programming generally. In Flutter, you define your user interface using widgets. In fact, everything in Flutter is a widget. Your application itself is a widget made up of several sub-widgets. All the widgets form what is known as a widget tree.


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

Graphics APIs have come a long way from a small set of basic commands allowing limited control of configurable stages of early 3D accelerators to very low-level programming interfaces exposing almost every aspect of the underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started getting widespread adoption and support from hardware vendors, while Direct3D11 and OpenGL are still considered industry standard. New APIs can provide substantial performance and functional improvements, but may not be supported by older platforms. An application targeting a wide range of platforms has to support Direct3D11 and OpenGL. New APIs will not give any advantage when used with old paradigms. It is entirely possible to add Direct3D12 support to an existing renderer by implementing the Direct3D11 interface through Direct3D12, but this will give zero benefit.


Tolerable security risk is a spectrum

All enterprises are different. Each company stores and manages different types of data sets. They have different applications and processes in place. Those in specific industries, such as healthcare and finance, have compliance restrictions that can be a nightmare. The notion is simple: everyone has different security needs and different data to protect. Thus, they should be on different parts of the security spectrum. For instance, in my earlier example, if the breached company were a tire manufacturer, spending four times the previous year’s security budget may be overspending, or not aligning with where it sits on the spectrum, just being reactionary. Yes, I’m making sweeping generalizations. Most tire manufacturers don’t deal with personally identifiable information the way that healthcare organizations do. Nor do they have to keep up with stringent auditable logging, as is required by most banks. Moreover, the data is probably fairly innocuous considering that the database information is about customers that are just a bunch of tire retailers, data that could easily be found on the website. Also, they don’t pay with credit cards, so none of that information is stored.


Web developers: Microsoft Blazor lets you build native iOS, Android apps in C#, .NET

Microsoft announced Blazor in early 2018 but still considers Blazor an experimental web UI framework from ASP.NET that aims to bring .NET applications to all browsers via WebAssembly. "It allows you to build true full-stack .NET applications, sharing code across server and client, with no need for transpilation or plugins," Microsoft explains. Microsoft is experimenting with Blazor and Mobile Blazor Bindings to cater to developers who are familiar with web programming and "web-specific patterns" to create native mobile apps. The idea behind releasing the mobile bindings now is to see whether these developers would like to use the "Blazor-style programming model with Razor syntax and features" as opposed to using XAML and Xamarin.Forms. However, the underlying UI components of Mobile Blazor Bindings are based on Xamarin.Forms. If the feedback is positive, Microsoft may end up including it in a future version of Visual Studio, according to Lipton.


'Cable Haunt' Modem Flaw Leaves 200 Million Devices at Risk  

The research team has dubbed such attacks Cable Haunt and says "an estimated 200 million cable modems in Europe alone" are at risk. They say every cable modem they have tested has been at risk, although some internet service providers have now developed and deployed firmware that mitigates the problem. Broadcom says it issued updated firmware code to fix the flaw eight months ago. "We have made the relevant fix to the reference code and this fix was made available to customers in May 2019," a spokeswoman tells Information Security Media Group. Service providers who have issued a patch will have based it on Broadcom's code updates. The vulnerability, originally codenamed "Graffiti," was discovered and has been disclosed by Alexander Dalsgaard Krog, Jens Hegner Stærmose and Kasper Kohsel Terndrup of Danish cybersecurity consultancy Lyrebirds, together with independent security researcher Simon Vandel Sillesen. Has the flaw been abused by attackers in the wild? "Maybe," the researchers write on the Cable Haunt site.


DRaaS decisions: Key choices in disaster recovery as a service


Self-service DRaaS involves the customer planning, buying, configuring, maintaining and testing disaster recovery services. And, although options for automation are improving, the IT team will typically need to be available to invoke the DR plan and run the recovery process. The benefits are flexibility and often cost. The business can choose exactly which mix of recovery services, backup and recovery software, and even the raw storage, it needs. A self-service model can lend itself to mixed environments, with multiple cloud data stores and application-based availability and DR tools. ... Managed DRaaS is the most comprehensive, but also the most expensive, option. The main benefit is that in-house IT teams can hand off DR operations entirely to the third party. This reduces the burden on skilled staff. And, although a managed service is typically more expensive than other DR options, it can be money well spent for a comprehensive service and peace of mind.



Quote for the day:


"The speed of the leader is the speed of the gang." -- Mary Kay Ash


Daily Tech Digest - July 30, 2019

What to Look Out For When Selecting a DRaaS Provider

Before exploring DRaaS, your organization should have a business impact analysis. In performing a current business impact analysis, you will be able to posit what would happen in the event of a disaster or disruption of business operations. ... When picking which DRaaS provider is right for you, use this information to determine if providers can accommodate your needs. After figuring out what your disaster recovery requirements are exactly, you can ask questions of providers in order to ascertain if they can support your needs. In the event that you were to experience data loss or corruption, learn the procedures of the providers in that situation by asking questions such as: How many copies of your backups are available? Where are those backups located? Is the provider able to recreate an image of your data at a specific, previous point from available backups? In calendar terms, how far back are backups accessible? What is the provider’s protocol when you perform a failover to DRaaS and are ready to go back to your standard environment afterward?


Google researchers disclose vulnerabilities for 'interactionless' iOS attacks

According to the researcher, four of the six security bugs can lead to the execution of malicious code on a remote iOS device, with no user interaction needed. All an attacker needs to do is to send a malformed message to a victim's phone, and the malicious code will execute once the user opens and views the received item. The four bugs are CVE-2019-8641 (details kept private), CVE-2019-8647, CVE-2019-8660, and CVE-2019-8662. The linked bug reports contain technical details about each bug, but also proof-of-concept code that can be used to craft exploits. The fifth and sixth bugs, CVE-2019-8624 and CVE-2019-8646, can allow an attacker to leak data from a device's memory and read files off a remote device, also with no user interaction. While it is always a good idea to install security updates as soon as they become available, the availability of proof-of-concept code means users should install the iOS 12.4 release with no further delay.



Top 5 financial services processes that are ripe for automation


Barely a day goes by without the launch of a new report extolling the potential benefits of artificial intelligence (AI) and automation in the financial services industry. These reports often refer to the potential for cost reduction, increased operational efficiency, improved customer experience and, ultimately, bottom-line growth. Indeed, analysts predict that AI will deliver a 22 percent reduction in operating costs (a saving of more than $1trn) across the global financial services industry by 2030 as business leaders look to transform both front and back-office functions. Demand for AI is coming from both ends of the market: established banks are recognising the need to respond to huge sector-wide disruption and to develop more agile operations in order to compete, while smaller fintech firms are looking to AI and automation as a way to scale quickly while keeping costs down. The scale of the opportunity is so vast that it can sometimes be a challenge for banks and insurance firms to know where to start or how to identify the process automations that will deliver most value.


Avoid chaos with an IT crisis management playbook


The second significant component of an IT crisis management playbook is a breakdown of common or recurring issues and their suggested fixes. Append the top resolution suggestions from the application vendors as well. Don't expect to create an exhaustive list, but describe coverage for five to 10 of the business's most critical applications. Create a comprehensive index for both vendors and IT operations staff to see quickly if they need to escalate an issue -- and to whom -- with internal contact information attached. A common question about crisis management playbooks is recommended format: paper or digital? If the modern paperless office is any indicator, create both. Paper binders require effort to update and store, but they also work without power -- something that's not a guarantee with a digital version.


Hackers target Telegram accounts through voicemail backdoor


According to the testimony of one of the arrested suspects, Walter Delgatti Neto, there’s another, potentially more vulnerable, way to get those verification messages: via voicemail. Accessing voicemail boxes turns out to be easier than it should be. Some people forget to set four-digit codes, and those that don’t can potentially be undone by crooks cycling through the 10,000 possibilities. Many voicemail systems fight back by checking that the number making an access call belongs to the subscriber, but these numbers can easily be spoofed if the attacker knows the correct number. If an attacker can access voicemail, they can access verification messages, such as Telegram’s, which are sent to voicemail if the hacker’s target is on a call or doesn’t answer three times in a row. Apparently, news of the weakness has spread on forums, leading to attacks on other valuable targets, including Puerto Rico Governor Ricardo Rosselló, whose position became untenable after his Telegram chats were recently leaked.


Strategy For and With AI

Our research strongly suggests that in a machine learning era, enterprise strategy is defined by the key performance indicators (KPIs) leaders choose to optimize. (See “About the Analysis.”) These KPIs can be customer centric or cost driven, process specific or investor oriented. These are the measures organizations use to create value, accountability, and competitive advantage. Bluntly: Leadership teams that can’t clearly identify and justify their strategic KPI portfolios have no strategy. In data-rich, digitally instrumented, and algorithmically informed markets, AI plays a critical role in determining what KPIs are measured, how they are measured, and how best to optimize them. Optimizing carefully selected KPIs becomes AI’s strategic purpose. Understanding the business value of optimization is key to aligning and integrating strategies for and with AI and machine learning. KPIs create accountability for optimizing strategic aspirations. Strategic KPIs are what smart machines learn to optimize. 


The Case For Transforming Banking (Even When Profits Are Strong)


Many financial institutions are saying the right things more than doing what is needed. Often, what is being done is in the context of banking from the past, as opposed to being recreated from the bottom up as you would if you were building a digital banking organization from scratch. And many of these initiatives are still moving at a snail's pace. Organizations are building digital account opening, loan application and new customer onboarding processes, but the majority of these processes still require the consumer to come into the branch, or have far too many steps similar to the paper-based processes of the past. And, while almost all organizations know the benefits of expanded data, advanced analytics and AI, very few have used these tools to personalize experiences or proactively offer solutions in real time. As stated in the BCG report, banking organizations must look at digital transformation in a holistic manner rather than as fragmented components that are not seamlessly integrated. More importantly, the direction for this transformation must come from the organization’s senior leadership and be supported by a culture that encourages innovation, digital customer experiences and aggressive market positioning.


Cyber security leadership in the age of fast and continuous delivery


Addressing the need for agile methods and the need to sustain adequate cyber security presents certain challenges for the CISO navigating a transforming business landscape. Here are the top six key triggers and challenges organizations are grappling with today. ... Cyber security usually has predefined contact points within a team's detailed planning and work schedule. These typically occur during initial software architecture definition and validation, with a couple of checkpoints ending with late testing and acceptance of the solution. Today, modern application security replaces the typically predefined interactions in the software lifecycle with more frequent interactions that increase dialogue, collaboration and efficiency. How do organizations re-organize cyber security to support this interaction, whether through staffing, automation or clever methodological work-arounds? ... It's not rare today to see cyber departments hiring software developers possessing a strong understanding of modern dynamics and training them in cyber security.


Capital One’s breach was inevitable, because we did nothing after Equifax

This time it’s the financial giant and credit card issuer Capital One, which revealed on Monday a credit file breach affecting 100 million Americans and 6 million Canadians. Consumers and small businesses affected are those who obtained one of the company’s credit cards dating back to 2005. That includes names, addresses, phone numbers, dates of birth, self-reported income and more credit card application data — including over 140,000 Social Security numbers in the U.S., and more than a million in Canada. The FBI already has a suspect in custody. Seattle resident and software developer Paige A. Thompson, 33, was arrested and detained pending trial. She’s been accused of stealing the data by breaching a web application firewall that was supposed to protect it. Sound familiar? It should. Just last week, credit rating giant Equifax settled for more than $575 million over a data breach it had — and hid from the public for several months — two years prior. Why should we be surprised? Equifax faced zero fallout until its eventual fine. All talk, much bluster, but otherwise little action.


Is The Future Of Artificial Intelligence Tied To The Future Of Blockchain?

uncaptioned
There is no doubt that blockchain is a disruptive technology that will give nations and their institutions the foundation for a decentralized future. Yet the way blockchain is being used and applied has enormous energy and environmental impacts. The reason lies in the process at the core of blockchain systems. The security of blockchain technology comes from its encryption, and its consensus mechanism requires that participants agree before new entries are written to the chain. Each of these requirements, individually and collectively, involves the intricate use of algorithms and enormous amounts of computing power. Because the computing power needed to keep current blockchain applications running is not sustainable, it is one of the critical challenges facing the future of the technology. It is not only blockchain and artificial intelligence, but all existing and emerging technologies, that are accelerating global computing power consumption. As a result, there is a visible need for increased computing power.
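The "enormous amounts of computing power" stem largely from proof-of-work consensus, where participants must burn CPU cycles searching for a hash below a target before they may append to the chain. This toy sketch (an assumption-laden illustration, not any production blockchain's code) shows why: each extra hex digit of difficulty multiplies the expected number of hash attempts by 16.

```python
import hashlib

def mine(data: str, difficulty: int) -> tuple[int, str]:
    """Find a nonce whose SHA-256 digest of data+nonce begins with
    `difficulty` zero hex digits -- a minimal proof-of-work loop."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Difficulty 4 already requires ~65,000 hash attempts on average;
# real networks operate at difficulties many orders of magnitude higher.
nonce, digest = mine("block-payload", 4)
print(nonce, digest)
```

Since every competing miner repeats this search independently and most of the work is discarded, aggregate energy use scales with the network's total hash rate rather than with useful transaction volume, which is the sustainability problem the article points to.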



Quote for the day:


"Leaders stuck in old cow paths are destined to repeat the same mistakes. Change leaders recognize the need to avoid old paths, old ideas and old plans." -- Reed Markham


April 15, 2014

DRaaS pricing lifts the burden of backup responsibilities
Disaster recovery is a topic as old as data centers themselves, but emerging technologies and applications are giving it new life. In particular, disaster recovery as a service, based in the cloud, enables small and medium-sized businesses (SMBs) to protect their IT infrastructure without breaking the bank. That's the focus of this month's Modern Infrastructure cover story, which explores the benefits of DR in the cloud, or DRaaS. DR sites used to be reserved for only deep-pocketed companies and IT teams, but the cloud has been a great equalizer when it comes to disaster recovery.


Large Scale Scrum (LeSS) @ J.P. Morgan
Before the adoption of LeSS, the teams in Securities were under a mandate to adopt certain core building-block components. For example, all datastore interaction utilised an internal proprietary framework that abstracted the application tier from datastore-specific functionality. This API layer was private code owned by a central team. The result was that if any team found a bug or needed a change, it had to persuade the central team to prioritise the work and then wait (often a long time) for the next release cycle. After adopting LeSS, with feature teams and a more internal open-source, collective-code-ownership approach, a more progressive stance took hold.


Boom time for digital technologies as CEOs make IT investment top priority for 2014
"If you look at that period from 2003 to 2008, the five-year economic boom before the crash, at that point the talk was about offshoring, outsourcing and ERP standardisation projects. In that boom period IT in the business was generally being kept under control, a lid put on it, even cut. "There was a sense that IT was a hygiene factor. That you needed to have it but it wasn't differentiating. People had bought into the idea that IT was something of a commodity; that's why we did all that offshoring and outsourcing."


Making room for risk in high-performing companies
Chobani, a relative newcomer in the yogurt industry, is a prime example of differentiation through disruption. One of Chobani’s innovations is a manufacturing process that involves recycling a whey byproduct as supplemental feed for its local farms. This helps foster sustainability as part of a commitment to the environment and the communities Chobani serves. Over time, many growing enterprises will seek to derive more value from their existing systems. This is where the process improvement journey begins. But once those processes are in place, many businesses lose room to maneuver.


Developer Details How He Built Software-Defined Networking App
Pearce, a veteran of 20 years of programming communications and networking technology, has primarily used C++ and C and admitted he didn't have a lot of experience with Java, required for the SDN programming. Pearce particularly noted he had some difficulty using the Maven project management tool, with which he had little experience. He encountered many challenges along the way, he said, but was able to produce a functioning example app on time, with help from some friends more experienced in the technology to smooth over the rough spots.


Farm machines produce privacy concerns, guidelines underway
"Virtually every company says it will never share, sell or use the data in a market-distorting way, but we would rather verify than trust," farmer Brian Marshall of the AFBF told the U.S. House Committee on Small Business in February (as reported in a post in AgProfessional). "The data would be a gold mine to traders in commodity markets and could influence farmland values," writes Karl Plume at Reuters. "While there are no documented instances so far of data being misused, lengthy contracts packed with open-ended language and differing from one supplier to the next are fueling mistrust."


Why Your Resident Loudmouth is a Big Asset
Expressive employees are your best secret weapon. They are natural leaders and passionate about improvement. So, enlist their help. Put them in charge of committees, seek their advice, and use their insights to make your company better. You will probably find that they start becoming less of a loudmouth as you treat them differently. After all, the best way to make someone stop pushing so hard is to remove the force of resistance. While opinionated and confident employees’ methods can sometimes be problematic, their intentions are often good.


New cloud service uses big data sources to improve emergency response
A platform like TIES can help to make the escalating explosion of online information more useful, Dodge said. "The problem with intelligence is that, 10 years ago, there wasn't enough to make good decisions. Now there is too much information," he said, adding that TIES allows users to take data, pull it into one location and then act on it. "What would have once taken hours and multiple people sorting through multiple sources to find vital information can now be done by a single analyst to put together a security or response plan to address top threats," he said.


USB Type-C: Simpler, faster and more powerful
In fact, the upcoming Type-C plug just might end up being the one plug to rule them all: A single USB connector that links everything from a PC's keyboard and mouse to external storage devices and displays. "The Type-C plug is a big step forward," says Jeff Ravencraft, chairman of the USB Implementers Forum (USB-IF), the organization that oversees the USB standard. "It might be confusing at first during the transition, but the Type-C plug could greatly simplify things over time by consolidating and replacing the larger USB connectors."


SparkCognition: Let machines address security threats
According to Husain, the MindSpark platform is built on patent-pending Pattern Recognition and Machine Learning techniques that enable cognitive capability. He pointed out that MindSpark — when exposed to security data — finds patterns of attack, identifies vectors, models attacker behavior, and much more. Husain also said that MindSpark aggregates its learning at a faster pace than any human or legacy software system. What it learns — the statistics models and base operational data — is offered as a cloud service.



Quote for the day:

"Work like you don't need the money. Love like you've never been hurt. Dance like nobody's watching." -- Satchel Paige