
Daily Tech Digest - February 26, 2026


Quote for the day:

"It is not such a fierce something to lead once you see your leadership as part of God's overall plan for his world." -- Calvin Miller



Boards don’t need cyber metrics — they need risk signals

Decision-makers want to know whether risk is increasing or decreasing, whether controls are effective, and whether the organization can limit damage when prevention fails. Metrics are therefore useful when they clarify those questions. “Time is really the universal metric because everyone can understand time,” Richard Bejtlich, strategist and author in residence at Corelight, tells CSO. “How fast do we detect problems, and how fast do we contain them. Dwell time, containment time. That’s the whole game for me.” Organizations cannot prevent every intrusion, Bejtlich argues, but they can measure how quickly they recognize and contain one. ... Wendy Nather, a longtime CISO who is now an advisor at EPSD, cautions against equating measurement with understanding. “When you are reporting to the board, there are some things you just cannot count that you have to report anyway,” she tells CSO. She points to incidents, near misses, and changes in assumptions as examples. “Anything that changes your assumptions about how you’re managing your security program, you should be bringing those to the board, even if you can’t count them,” Nather says. Regular metrics can create a rhythm of predictability, and that predictability could lull board members into a false sense of security. “Metrics are very seductive,” she says. “They lead us toward things that can be counted, that happen on a regular basis.” The result may be a steady flow of data that obscures structural risk or emerging weaknesses, Nather warns. 
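Bejtlich's "universal metric" of time is easy to operationalize once incident timestamps are recorded. A minimal sketch, assuming hypothetical timestamp fields for intrusion start, detection, and containment (the field names and example timeline are illustrative, not from the article):

```python
from datetime import datetime

def time_metrics(intrusion_start, detected_at, contained_at):
    """Compute the two time metrics Bejtlich describes.

    Dwell time: intrusion start -> detection.
    Containment time: detection -> containment.
    """
    dwell = detected_at - intrusion_start
    containment = contained_at - detected_at
    return dwell, containment

# Hypothetical incident timeline
start = datetime(2026, 2, 1, 9, 0)
detected = datetime(2026, 2, 3, 14, 30)
contained = datetime(2026, 2, 3, 20, 0)

dwell, containment = time_metrics(start, detected, contained)
print(dwell)        # 2 days, 5:30:00
print(containment)  # 5:30:00
```

Tracked per incident and trended over quarters, these two durations give a board the "is risk increasing or decreasing" signal without requiring any security expertise to interpret.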


The Enterprise AI Postmortem Playbook: Diagnosing Failures at the Data Layer

Your first rule of the playbook is to treat AI incidents as data incidents – until proven otherwise. You should start by tagging the failure type. Document whether it’s a structure issue, retrieval misalignment, conflict with metric definition, or other categories. Ideally, you want to assign the issue to an owner and attach evidence to force some discipline into the review. Try to classify the issue into clearly defined buckets. For example, you can classify into these four buckets: structural failure, retrieval misalignment, definition conflict, or freshness failure. Once this part is clear, the investigation becomes more focused. The goal with this step is to isolate the data fault line. ... The next step is to move one layer deeper. Identify the source table behind the retrieved context. You also want to confirm the timestamp of the last refresh. Check whether any ingestion jobs failed, partially completed, or ran late. Silent failures are common. A job may succeed technically while loading incomplete data. As you go through the playbook, continue tracing upstream. Find the transformation job that shaped the dataset. Look at recent schema changes. Check whether any business rules were updated. The idea here is to rebuild the exact path that led to the output. Try not to make any assumptions at this stage about model behavior – simply keep tracing until the process is complete. Don’t be surprised if the model simply worked with what it was given.
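The tagging step above — a failure bucket, an owner, and attached evidence — can be sketched as a small record type. This is only an illustrative shape, assuming hypothetical field names and an example incident; the playbook itself prescribes no particular schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class FailureType(Enum):
    """The four buckets suggested by the playbook."""
    STRUCTURAL = "structural failure"
    RETRIEVAL = "retrieval misalignment"
    DEFINITION = "definition conflict"
    FRESHNESS = "freshness failure"

@dataclass
class AIIncident:
    summary: str
    failure_type: FailureType
    owner: str                          # forces ownership during review
    evidence: list = field(default_factory=list)  # forces attached evidence

incident = AIIncident(
    summary="Agent reported last quarter's figures as current revenue",
    failure_type=FailureType.FRESHNESS,
    owner="data-platform-team",
    evidence=["ingestion job finance_daily ran late on 2026-02-24"],
)
print(incident.failure_type.value)  # freshness failure
```

Even a structure this thin makes the "data incident until proven otherwise" rule enforceable: an incident record without an owner or evidence simply cannot be filed.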


Top Attacks On Biometric Systems (And How To Defend Against Them)

Presentation attacks, often referred to as spoofing attacks, occur when an attacker presents a fake biometric sample to a sensor (like a camera or microphone) in an attempt to impersonate a legitimate user. Common examples include printed photos, video replays, silicone masks, prosthetics or synthetic fingerprints. More recently, high-quality deepfake videos have become a powerful new tool in the attacker’s arsenal. ... Passive liveness techniques, which analyze subtle physiological and behavioral signals without requiring user interaction, are particularly effective because they reduce friction while improving security. However, liveness detection must be resilient to unknown attack methods, not just tuned to detect known spoof types. ... Not all biometric attacks happen in front of the sensor. Replay and injection attacks target the biometric data pipeline itself. In these scenarios, attackers intercept, replay or inject biometric data, such as images or templates, directly into the system, bypassing the sensor entirely. ... Defensive strategies must extend beyond the biometric algorithm. Secure transmission, encryption in transit, device attestation, trusted execution environments and validation that data originates from an authorized sensor are all essential. ... Although less visible to end users, attacks targeting biometric templates and databases can pose long-term risks. If biometric templates are compromised, the impact extends far beyond a single breach.


Open-source security debt grows across commercial software

High and critical risk findings remain widespread. Most codebases contain at least one high risk vulnerability, and nearly half contain at least one critical risk issue. Those rates dipped slightly from the prior year even as total vulnerability counts rose. Supply chain attacks add another layer of risk. Sixty-five percent of surveyed organizations experienced a software supply chain attack in the past year. ... “As AI reshapes software development, security teams will have to continue to adapt in turn. Security budgets and security guidelines should reflect this new reality. Leaders should continue to invest in tooling and education required to equip teams to manage the drastic increase in velocity, volume, and complexity of applications,” Mackey said. Board level reporting also requires adjustment as vulnerability volumes rise. ... Outdated components appear in nearly every audited environment. More than nine in ten codebases contain components that are several years out of date or show no recent development activity. A large share of components run many versions behind current releases. Only a small fraction operate on the latest available version. This maintenance debt intersects with regulatory obligations. The EU Cyber Resilience Act entered into force in late 2024, with key reporting requirements taking effect in 2026 and broader enforcement following in 2027. 


The agentic enterprise: Why value streams and capability maps are your new governance control plane

The enterprise is currently undergoing a seismic pivot from generative AI, which focuses on content creation, to agentic AI, which focuses on goal execution. Unlike their predecessors, these agents possess “structured autonomy”: the ability to perceive contexts, plan actions and execute across systems without constant human intervention. For the CIO and the enterprise architect, this is not merely an upgrade in automation speed; it is a fundamental shift in the firm’s economic equation. We are moving from labor-centric workflows to digital labor capable of disassembling and reassembling entire value chains. ... In an agentic enterprise, the value stream map is no longer just a diagram; it is the control plane. It must explicitly define the handoff protocols between human and digital agents. In my opinion, value stream maps must move from static documents stored in a repository to context documents used to drive agentic automation. ... If a value stream does not exist, you cannot automate it. For new agentic workflows, do not map the current human process. Instead, use an outcome-backwards approach. Work backward from the concrete deliverable (e.g., customer onboarded) to identify the minimum viable API calls required. Before granting write access, run the new agentic stream in shadow mode to validate agent decisions against human outcomes.
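The shadow-mode validation step can be sketched as a simple agreement check: the agent proposes decisions with no write access, and its proposals are compared against what humans actually did. The case structure and decision labels below are hypothetical, purely to illustrate the pattern:

```python
def shadow_mode_report(cases):
    """Fraction of cases where the shadowed agent's proposed decision
    matched the recorded human outcome. The agent never writes; it
    only proposes, so a mismatch is free to observe."""
    matches = sum(1 for c in cases if c["agent_decision"] == c["human_outcome"])
    return matches / len(cases)

# Hypothetical shadow-mode log for a customer-onboarding stream
cases = [
    {"agent_decision": "approve", "human_outcome": "approve"},
    {"agent_decision": "approve", "human_outcome": "reject"},
    {"agent_decision": "reject",  "human_outcome": "reject"},
    {"agent_decision": "approve", "human_outcome": "approve"},
]
print(shadow_mode_report(cases))  # 0.75
```

An agreement rate, tracked over time and broken down by decision type, gives a concrete threshold to clear before the agent is ever granted write access.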


Beyond compliance: Building a culture of data security in the digital enterprise

Cyber compliance is something organisations across industrial sectors take seriously, especially with new regulations getting introduced and non-compliance having consequences such as hefty penalties. Hence, businesses are placing compliance among their top priorities. However, hyper-focusing only on compliance can lead to tunnel vision, crippling creativity and innovation. It fails to offer a comprehensive risk assessment due to the checklist approach it follows, exposing organizations to vulnerabilities and fast-evolving threats. Having a compliance-first mindset can lead to incomplete risk assessment, creating blind spots and gaps in security provisions. ... With businesses relying on data for operations, customer engagement, and decision-making, ensuring data security protects both users and organisations. Data breaches have severe consequences, including financial losses, reputational damage, customer churn, and regulatory penalties. With data moving across on-premises data centers, cloud platforms, third-party ecosystems, remote work environments, and AI-driven applications, there is a need for a holistic, culture-driven approach to cybersecurity. ... Data protection traditionally was focused on safeguarding the perimeter by securing networks and systems within the physical boundaries where data was normally stored. 


If you thought RTO battles were bad, wait until AI mandates start taking hold across the industry

With the advent of generative AI and the incessant beating of the drum by executives hellbent on unlocking productivity gains, we could see a revival of the dreaded workforce mandate – only this time with AI. We’ve already had a glimpse of the same RTO tactics being used with AI over the last year. In mid-2025, Microsoft introduced new rules aimed at boosting AI use across the company, with an internal memo warning staff that “using AI is no longer optional”. ... As with RTO mandates, we’re now reaching a point where upward mobility within the enterprise could be at risk as a result of AI use. It’s a tactic initially touted by Dell in 2024 when enforcing its own hybrid work rules, which prompted a fierce backlash among staff. Forcing workers to use AI or risk losing out on promotions will have the desired effect executives want, namely that employees will use the technology, but that’s missing the point entirely. AI has been framed by many big tech providers as a prime opportunity to supercharge productivity and streamline enterprise efficiency. We’ve all heard the marketing jargon. If business leaders are at the point where they’re forcing staff to use the technology, it begs the question of whether it’s actually having the desired effect, which recent analysis suggests it’s not. ... Recent analysis from CompTIA found roughly one-third of companies now require staff to complete AI training. 


In perfect harmony: How Emerald AI is turning data centers into flexible grid assets

At the core of Emerald AI is its Emerald Conductor platform. Described by Sivaram as “an AI for AI,” the system orchestrates thousands of AI workloads across one or more data centers, dynamically adjusting operations to respond to grid conditions while ensuring the facility maintains performance. The system achieves this through a closed-loop orchestration platform comprising an autonomous agent and a digital twin simulator. ... A point keenly made by Steve Smith, chief strategy and regulation officer at National Grid, at the time of the announcement: “As the UK’s digital economy grows, unlocking new ways to flexibly manage energy use is essential for connecting more data centers to our network efficiently.” The second reason was National Grid's transatlantic stature - as a utility active in both the UK and US markets - and its commitment to the technology. “They’ve invested in the program and agreed to a demo, which makes them the ideal partner for our first international launch,” says Sivaram. The final, and most important, factor, notes Sivaram, was the access to the NextGrid Alliance, a consortium of 150 utilities worldwide. By gaining access to such a robust partner network, the deal could serve as a springboard for further international projects. This aligns with the company’s broader partnership approach. Emerald AI has already leveraged Nvidia’s cloud partner network to test its technology across US data centers, laying the groundwork for broader deployment and continued global collaboration. 


7 ways to tame multicloud chaos with generative AI

Architects have the difficult job of understanding tradeoffs between proprietary cloud services and cross-cloud platforms. For example, should developers use AWS Glue, Azure Data Factory, or Google Cloud Data Fusion to develop data pipelines on the respective platforms, or should they adopt a data integration platform that works across clouds? ... “Managing multicloud is like learning multiple languages from AWS, Azure, Oracle, and others, and it’s rare to have teams that can traverse these environments fluidly and effectively. Plus, services and concepts are not portable among clouds, especially in cloud-native PaaS services that go beyond IaaS,” says Harshit Omar, co-founder and CTO at FluidCloud. One way to work around this issue is to assign an AI agent to support the developer or architect in evaluating platform selections. ... Standardizing infrastructure and service configurations across different clouds requires expertise in different naming conventions, architecture, tools, APIs, and other paradigms. Look for genAI tools to act as a translator to streamline configurations, especially for organizations that can templatize their requirements. ... CI/CD, infrastructure-as-code, and process automation are key tools for driving efficiency, especially when tasks span multiple cloud environments. Many of these tools use basic flows and rules to streamline tasks or orchestrate operations, which can create boundary cases that cause process-blocking errors. 


It’s Time To Reinforce Institutional Crypto Key Management With MPC: Sodot CEO

For years, crypto security operations were almost exclusively focused on finding a way to protect the private keys to crypto wallets. It’s known as the “custody risk,” and it will always be a concern to anyone holding digital assets. However, Sofer believes that custody is no longer the weakest link. Cyberattackers have come to realize that secure wallets, often held in cold storage, are far too difficult to crack. ... Sodot has built a self-hosted infrastructure platform that leverages a pair of cutting-edge security techniques – namely, Multi-Party Computation or MPC and Trusted Execution Environments or TEEs. With Sodot’s platform, API keys are never reassembled in full plaintext, eliminating one of the main weaknesses of traditional secrets managers, which typically expose the entire key to any authenticated machine. Instead, Sodot uses MPC to split each key into multiple “shares” that are held by different partners on different technology stacks, Sofer explained. Distributing risk in this way makes an attacker’s job exponentially more difficult, as it means they would have to compromise multiple isolated systems to gain access. ... “Keys are here to stay, and they will control more value and become more sensitive as technology progresses,” Sofer concluded. “As financial institutions get more involved in crypto, we believe demand for self-hosted solutions that secure them will only grow, driven by performance requirements, operational resilience, and control over security boundaries.”
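The core idea of splitting a key into shares that are never reassembled can be illustrated with the simplest possible scheme: XOR-based additive sharing, where all shares are required and any strict subset reveals nothing about the key. To be clear, this toy sketch is not Sodot's MPC protocol (real MPC lets parties *compute signatures* jointly without ever combining shares anywhere); it only shows why distributed shares raise the bar for an attacker:

```python
import secrets

def split_key(key: bytes, n: int = 3):
    """Split a key into n XOR shares. All n shares are required to
    reconstruct; any n-1 shares are statistically independent of the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares):
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
shares = split_key(key)           # held by three different parties/stacks
assert combine(shares) == key     # all three together recover the key
# combine(shares[:2]) is uniformly random noise: two shares reveal nothing
```

An attacker who compromises one host holding one share learns nothing; as the article notes, they would have to breach multiple isolated systems on different technology stacks simultaneously.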

Daily Tech Digest - June 06, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


The intersection of identity security and data privacy laws for a safe digital space

The integration of identity security with data privacy has become essential for corporations, governing bodies, and policymakers. Compliance regulations are set by frameworks such as the Digital Personal Data Protection (DPDP) Bill and the CERT-In directives – but encryption and access control alone are no longer enough. AI-driven identity security tools flag access combinations before they become gateways to fraud, monitor behavior anomalies in real-time, and offer deep, contextual visibility into both human and machine identities. All these factors combined bring about trust-building, resilient security: proactive security that is self-adjusting, overcoming various challenges encountered today. By aligning intelligent identity security tools with privacy regulations, organisations gain more than just protection—they earn credibility. ... The DPDP Act tracks closely to global benchmarks such as GDPR and data protection regulations in Singapore and Australia, which mandate organisations to implement appropriate security measures to protect personal data and amp up response to data breaches. They also assert that organisations that embrace and prioritise data privacy and identity security stand to gain the optimum level of reduced risk and enhanced trust from customers, partners and regulators.


Who needs real things when everything can be a hologram?

Meta founder and CEO Mark Zuckerberg said recently on Theo Von’s “This Past Weekend” podcast that everything is shifting to holograms. A hologram is a three-dimensional image that represents an object in a way that allows it to be viewed from different angles, creating the illusion of depth. Zuckerberg predicts that most of our physical objects will become obsolete and replaced by holographic versions seen through augmented reality (AR) glasses. The conversation floated the idea that books, board games, ping-pong tables, and even smartphones could all be virtualized, replacing the physical, real-world versions. Zuckerberg also expects that somewhere between one and two billion people could replace their smartphones with AR glasses within four years. One potential problem with that prediction: the public has to want to replace physical objects with holographic versions. So far, Apple’s experience with Apple Vision Pro does not imply that the public is clamoring for holographic replacements. ... I have no doubt that holograms will increasingly become ubiquitous in our lives. But I doubt that a majority will ever prefer a holographic virtual book over a physical book or even a physical e-book reader. The same goes for other objects in our lives. I also suspect both Zuckerberg’s motives and his predictive powers.


How AI Is Rewriting the CIO’s Workforce Strategy

With the mystique fading, enterprises are replacing large prompt-engineering teams with AI platform engineers, MLOps architects, and cross-trained analysts. A prompt engineer in 2023 often becomes a context architect by 2025; data scientists evolve into AI integrators; business-intelligence analysts transition into AI interaction designers; and DevOps engineers step up as MLOps platform leads. The cultural shift matters as much as the job titles. AI work is no longer about one-off magic; it is about building reliable infrastructure. CIOs generally face three choices. One is to spend on systems that make prompts reproducible and maintainable, such as RAG pipelines or proprietary context platforms. Another is to cut excessive spending on niche roles now being absorbed by automation. The third is to reskill internal talent, transforming today’s prompt writers into tomorrow’s systems thinkers who understand context flows, memory management, and AI security. A skilled prompt engineer today can become an exceptional context architect tomorrow, provided the organization invests in training. ... Prompt engineering isn’t dead, but its peak as a standalone role may already be behind us. The smartest organizations are shifting to systems that abstract prompt complexity and scale their AI capability without becoming dependent on a single human’s creativity.


Biometric privacy on trial: The constitutional stakes in United States v. Brown

The divergence between the two federal circuit courts has created a classic “circuit split,” a situation that almost inevitably calls for resolution by the U.S. Supreme Court. Legal scholars point out that this split could not be more consequential, as it directly affects how courts across the country treat compelled access to devices that contain vast troves of personal, private, and potentially incriminating information. What’s at stake in the Brown decision goes far beyond criminal law. In the digital age, smartphones are extensions of the self, containing everything from personal messages and photos to financial records, location data, and even health information. Unlocking one’s device may reveal more than a house search could have in the 18th century, and the very kind of search the Bill of Rights was designed to restrict. If the D.C. Circuit’s reasoning prevails, biometric security methods like Apple’s Face ID, Samsung’s iris scans, and various fingerprint unlock systems could receive constitutional protection when used to lock private data. That, in turn, could significantly limit law enforcement’s ability to compel access to devices without a warrant or consent. Moreover, such a ruling would align biometric authentication with established protections for passcodes. 


GenAI controls and ZTNA architecture set SSE vendors apart

“[SSE] provides a range of security capabilities, including adaptive access based on identity and context, malware protection, data security, and threat prevention, as well as the associated analytics and visibility,” Gartner writes. “It enables more direct connectivity for hybrid users by reducing latency and providing the potential for improved user experience.” Must-haves include advanced data protection capabilities – such as unified data leak protection (DLP), content-aware encryption, and label-based controls – that enable enterprises to enforce consistent data security policies across web, cloud, and private applications. Securing Software-as-a-Service (SaaS) applications is another important area, according to Gartner. SaaS security posture management (SSPM) and deep API integrations provide real-time visibility into SaaS app usage, configurations, and user behaviors, which Gartner says can help security teams remediate risks before they become incidents. Gartner defines SSPM as a category of tools that continuously assess and manage the security posture of SaaS apps. ... Other necessary capabilities for a complete SSE solution include digital experience monitoring (DEM) and AI-driven automation and coaching, according to Gartner. 


5 Risk Management Lessons OT Cybersecurity Leaders Can’t Afford to Ignore

Weak or shared passwords, outdated software, and misconfigured networks are consistently leveraged by malicious actors. Seemingly minor oversights can create significant gaps in an organization’s defenses, allowing attackers to gain unauthorized access and cause havoc. When the basics break down, particularly in converged IT/OT environments where attackers only need one foothold, consequences escalate fast. ... One common misconception in critical infrastructure is that OT systems are safe unless directly targeted. However, the reality is far more nuanced. Many incidents impacting OT environments originate as seemingly innocuous IT intrusions. Attackers enter through an overlooked endpoint or compromised credential in the enterprise network and then move laterally into the OT environment through weak segmentation or misconfigured gateways. This pattern has repeatedly emerged in the pipeline sector. ... Time and again, post-mortems reveal the same pattern: organizations lacking in tested procedures, clear roles, or real-world readiness. A proactive posture begins with rigorous risk assessments, threat modeling, and vulnerability scanning—not once, but as a cycle that evolves with the threat landscape. This plan should outline clear procedures for detecting, containing, and recovering from cyber incidents. 


You Can Build Authentication In-House, But Should You?

Auth isn’t a static feature. It evolves — layer by layer — as your product grows, your user base diversifies, and enterprise customers introduce new requirements. Over time, the simple system you started with is forced to stretch well beyond its original architecture. Every engineering team that builds auth internally will encounter key inflection points — moments when the complexity, security risk, and maintenance burden begin to outweigh the benefits of control. ... Once you’re selling into larger businesses, SSO becomes a hard requirement for enterprises. Customers want to integrate with their own identity providers like Okta, Microsoft Entra, or Google Workspace using protocols like SAML or OIDC. Implementing these protocols is non-trivial, especially when each customer has their own quirks and expectations around onboarding, metadata exchange, and user mapping. ... Once SSO is in place, the following enterprise requirement is often SCIM (System for Cross-domain Identity Management). SCIM, also known as Directory Sync, enables organizations to provision automatically and deprovision user accounts through their identity provider. Supporting it properly means syncing state between your system and theirs and handling partial failures gracefully. ... The newest wave of complexity in modern authentication comes from AI agents and LLM-powered applications. 
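Part of what makes SCIM support non-trivial is the mapping layer: the identity provider sends users in the SCIM 2.0 core schema (RFC 7643), and your system must translate that into its own user model, tolerating optional fields. A minimal sketch of that mapping, where the internal record shape (`username`, `email`, `active`, `external_id`) is a hypothetical stand-in for whatever your system uses:

```python
def scim_user_to_internal(payload: dict) -> dict:
    """Map a SCIM 2.0 /Users payload (RFC 7643 core User schema)
    to a hypothetical internal user record."""
    # emails is a multi-valued attribute; pick the one flagged primary
    primary_email = next(
        (e["value"] for e in payload.get("emails", []) if e.get("primary")),
        None,
    )
    return {
        "username": payload["userName"],         # required by the spec
        "email": primary_email,
        "active": payload.get("active", True),   # optional in the spec
        "external_id": payload.get("externalId"),
    }

# Example payload as an IdP like Okta or Entra might send it
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "externalId": "00u1abc",
    "active": True,
    "emails": [{"value": "jdoe@example.com", "primary": True}],
}
print(scim_user_to_internal(payload)["username"])  # jdoe@example.com
```

The mapping itself is the easy part; the ongoing burden the article alludes to is keeping this state synchronized in both directions and recovering gracefully when a provisioning call half-succeeds.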


Developer Joy: A Better Way to Boost Developer Productivity

Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged. ... Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures. ... When we’re stuck on a problem, we’ll often bang our head against the code until midnight, without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving. Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. “Dead time” in these examples isn’t slacking; it’s deep problem-solving in disguise.


Get out of the audit committee: Why CISOs need dedicated board time

The problem is the limited time allocated to CISOs in audit committee meetings is not sufficient for comprehensive cybersecurity discussions. Increasingly, more time is needed for conversations around managing the complex risk landscape. In previous CISO roles, Gerchow had a similar cadence, with quarterly time with the security committee and quarterly time with the board. He also had closed door sessions with only board members. “Anyone who’s an employee of the company, even the CEO, has to drop off the call or leave the room, so it’s just you with the board or the director of the board,” he tells CSO. He found these particularly important for enabling frank conversations, which might centre on budget, roadblocks to new security implementations or whether he and his team are getting enough time to implement security programs. “They may ask: ‘How are things really going? Are you getting the support you need?’ It’s a transparent conversation without the other executives of the company being present.”


Mind the Gap: AI-Driven Data and Analytics Disruption

The Holy Grail of metadata collection is extracting meaning from program code: data structures and entities, data elements, functionality, and lineage. For me, this is one of the most potentially interesting and impactful applications of AI to information management. I’ve tried it, and it works. I loaded an old C program that had no comments but reasonably descriptive variable names into ChatGPT, and it figured out what the program was doing, the purpose of each function, and gave a description for each variable. Eventually this capability will be used like other code analysis tools currently used by development teams as part of the CI/CD pipeline. Run one set of tools to look for code defects. Run another to extract and curate metadata. Someone will still have to review the results, but this gets us a long way there. ... Large language models can be applied in analytics a couple different ways. The first is to generate the answer solely from the LLM. Start by ingesting your corporate information into the LLM as context. Then, ask it a question directly and it will generate an answer. Hopefully the correct answer. But would you trust the answer? Associative memories are not the most reliable for database-style lookups. Imagine ingesting all of the company’s transactions then asking for the total net revenue for a particular customer. Why would you do that? Just use a database. 
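The "just use a database" point deserves to be made concrete: an aggregate like net revenue per customer is a one-line, exact, auditable query, whereas an LLM's associative recall over ingested transactions gives you an answer you cannot verify. A minimal sketch with an in-memory SQLite table and made-up example rows:

```python
import sqlite3

# Toy transaction table standing in for the company's transaction store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("acme", 1200.0), ("acme", -200.0), ("globex", 500.0)],
)

# Exact, deterministic answer -- no associative memory involved
(total,) = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE customer = ?", ("acme",)
).fetchone()
print(total)  # 1000.0
```

The sensible division of labor the article is gesturing at: let the LLM translate the user's question into the query (or pick the right metric definition), and let the database do the arithmetic.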

Daily Tech Digest - September 10, 2021

AI as a service to solve your business problems? Guess again

Companies seeking to use AI as a differentiating technology in order to gain business advantages — and not merely doing it because that’s what everyone else is doing — require planning and strategy, and that almost always means a customized solution. In the words of Sepp Hochreiter (inventor of LSTM, one of the world’s most famous and successful AI algorithms), “the ideal combination for the best time to market and lowest risk for your AI projects is to slowly build a team and use external proven experts as well. No one can hire the best talent quickly, and even worse, you cannot even judge the quality during hiring but will only find out years later.” That’s a far cry from what most online off-the-shelf AI services offer today. The artificial intelligence technology offered by AIaaS comes in two flavors — and the predominant one is a very basic AI system that claims to provide a “one-size-fits-all” solution for all businesses. Modules offered by AI service providers are meant to be applied, as-is, to anything from organizing a stockroom to optimizing a customer database to preventing anomalies in production of a multitude of products.


Let’s Redefine “Productivity” for the Hybrid Era

Despite the burnout so many of us feel, the hybrid environment offers an opportunity to create a more sustainable approach to work. Remote and in-person work both have distinct advantages and disadvantages, and rather than expecting the same outcomes from each, we can build on what makes them unique. When in the office, prioritize relationships and collaborative work like brainstorming around a whiteboard. When working from home, encourage people to design their days to include other priorities such as family, fitness, or hobbies. They should take a nap if they need one and step outside between meetings. Brain studies show that even five-minute breaks between remote meetings help people think more clearly and reduce stress. Likewise, watch out for the risks each type of work carries with it. People can avoid the long commutes they used to have by staggering their schedules to avoid traffic. Encourage them to set boundaries at home so they don’t work every hour of the day just because they can. The trick is finding what works for each individual. 


DevOps Is Not Automation

A highly evolved DevOps team isn’t just about automating processes; it’s about eliminating production roadblocks. Automating processes without changing how your teams communicate just moves the roadblocks around. A key first step to truly effective DevOps is to synchronize development and operations teams, which in traditional tech culture are siloed and often at odds. Forte Group points out that development teams are typically incentivized to push things forward (get their deliverables in on time), while quality assurance teams and system administrators are incentivized to minimize disruptions (which often means pushing back deadlines to focus on a quality product). To create a culture where continuous development is possible, these teams have to think of their work as sharing an objective, and they need to communicate frequently and effectively. DevOps also requires a shift from one big deliverable at the end of a long development period to small, incremental deployments that happen regularly and are constantly being monitored and adjusted.


Who Should Own The Job Of Observability In DevOps?

Observability helps answer any question. Of course, this applies to troubleshooting as well as helping users address the unknowns inside today’s complex business systems. With observability, companies can continuously monitor and react to issues or faults. Although observability may seem like the new buzzword in IT, it actually isn’t new at all. The term came about as part of the evolution of monitoring. As organizations began to move toward the cloud and microservice applications, they needed a strategy that enabled them to monitor at scale, along with answering the questions that were not defined during the implementation of the monitoring system. Observability improves the way we collect data and provides the data necessary to drive digital businesses forward. ... Great monitoring tools count for little if people don’t know how to use them properly. Organizations can have too many tools, owned by different teams, so there’s a challenge around the selection and ownership of specific tools within an organization. Organizations must be sure to take the necessary steps of clearly communicating to developers their roles and responsibilities and options available for them to solve the observability challenge. 


Tooling Network Detection & Response for Ransomware

If ransomware is given too much time on the network, even if it doesn’t gain access to your most critical data, it could have an impact on day-to-day operations. By tracking the ransomware’s lateral movement, organizations can see where it moved, and, more importantly, which machines were infected. Doing so reduces the number of machines infected and thus reduces the time to recovery. Tracking lateral movement is only as good as the data being collected. When new machines or new employees connect to the network, organizations should start monitoring those connections right away. Doing so will provide the most visibility and will enable the organization to track malicious movement from all devices on the network. Additionally, understanding how malicious software is connecting throughout your network requires having an NDR system capable of collecting network flow data and analyzing it. By leveraging flow data, organizations can quickly determine where ransomware—and other malware—are moving across the network. 
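The idea of mining flow data for lateral movement can be sketched in a few lines. This is a toy illustration, not an NDR product: it assumes flow records have been reduced to (source, destination, port) tuples, and it simply walks the connection graph outward from a known-infected host.

```python
from collections import defaultdict, deque

def trace_lateral_movement(flows, patient_zero):
    """Breadth-first walk over flow records to find every host reachable
    from a known-infected machine, in connection order."""
    graph = defaultdict(set)
    for src, dst, _port in flows:
        graph[src].add(dst)
    infected, queue = {patient_zero}, deque([patient_zero])
    while queue:
        host = queue.popleft()
        for peer in graph[host]:
            if peer not in infected:
                infected.add(peer)
                queue.append(peer)
    return infected

# Hypothetical flow records: (source IP, destination IP, destination port)
flows = [
    ("10.0.0.5", "10.0.0.7", 445),   # SMB hop
    ("10.0.0.7", "10.0.0.9", 3389),  # RDP hop
    ("10.0.0.2", "10.0.0.3", 443),   # unrelated traffic
]
print(trace_lateral_movement(flows, "10.0.0.5"))
```

A real NDR system would also weight connections by time, protocol, and behavior, but this reachability walk is the core of determining which machines the ransomware could have touched.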


Why do humans learn so much faster than machine learning models?

Strides have been made in enabling ML models to mimic the kind of understanding humans have. A great and frankly magical example is word embeddings. ... Word embeddings are a way to represent text data as numbers, which is needed if you want to feed text into an ML model. Word embeddings represent each word using, say, 50 features. Words that are close together in this 50-dimensional space are similar in meaning, for example apple and orange. The challenge is how to construct these 50 features. Multiple approaches have been proposed, but in this article we focus on GloVe word embeddings. GloVe embeddings are derived from a co-occurrence matrix of the words in a corpus: if words occur in the same textual context, GloVe assumes they are similar in meaning. This is the first hint that word embeddings learn an understanding of the corpus they train on. If a lot of fruits are used in a given context, the embeddings will know that apple would fit in that place.
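The “closeness” of embedding vectors is usually measured with cosine similarity. A minimal sketch with made-up 4-dimensional vectors (real GloVe vectors have 50 or more dimensions and are trained from the co-occurrence matrix):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 4-dimensional "embeddings" for illustration only
embeddings = {
    "apple":  [0.9, 0.8, 0.1, 0.0],
    "orange": [0.8, 0.9, 0.2, 0.1],
    "car":    [0.0, 0.1, 0.9, 0.8],
}
print(cosine_similarity(embeddings["apple"], embeddings["orange"]))  # high
print(cosine_similarity(embeddings["apple"], embeddings["car"]))     # low
```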


Observability is key to the future of software (and your DevOps career)

Observability platforms enable you to easily figure out what’s happening with every request and to identify the cause of issues fast. Learning the principles of observability and OpenTelemetry will set you apart from the crowd and provide you with a skill set that will be in increasing demand as more companies perform cloud migrations. From an end-user perspective, “telemetry” can be a scary-sounding word, but in observability it simply describes the data coming from your applications and infrastructure, organized into three primary pillars: metrics, traces, and logs. This telemetry is the foundation of any monitoring or observability system. OpenTelemetry is an industry standard for instrumenting applications to produce this telemetry, collecting it across the infrastructure, and emitting it to an observability system. ... As an engineer, the best way to get started with something is to get your hands dirty. As someone who works for a commercial observability vendor, I’d be remiss not to tell you to try a free trial of Splunk Observability Cloud (no credit card required), and the integration wizards that walk you through setup actually have you integrate your architecture with OpenTelemetry.
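To make the traces pillar concrete, here is a toy tracer in plain Python. This is not the OpenTelemetry API, just an illustration of what a span is: a named, timed unit of work that can nest inside another.

```python
import time
from contextlib import contextmanager

class ToyTracer:
    """Minimal illustration of tracing. A real SDK also records span IDs,
    parent/child links, attributes, and exports spans to a backend."""
    def __init__(self):
        self.finished = []

    @contextmanager
    def span(self, name):
        start = time.monotonic()
        try:
            yield
        finally:
            self.finished.append((name, time.monotonic() - start))

tracer = ToyTracer()
with tracer.span("handle_request"):
    with tracer.span("query_db"):
        pass  # real work would happen here

# Inner spans close first, so "query_db" is recorded before "handle_request"
print([name for name, _ in tracer.finished])
```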


Service Mesh Ultimate Guide - Second Edition: Next Generation Microservices Development

Broadly speaking, the data plane “does the work” and is responsible for “conditionally translating, forwarding, and observing every network packet that flows to and from a [network endpoint].” In modern systems, the data plane is typically implemented as a proxy (such as Envoy, HAProxy, or MOSN) that runs out-of-process alongside each service as a “sidecar.” Linkerd uses a micro-proxy approach that’s optimized for the service mesh sidecar use case. A control plane “supervises the work” and takes all the individual instances of the data plane—a set of isolated stateless sidecar proxies—and turns them into a distributed system. The control plane doesn’t touch any packets/requests in the system; instead, it allows a human operator to provide policy and configuration for all of the running data planes in the mesh. The control plane also enables the data plane telemetry to be collected and centralized, ready for consumption by an operator.


‘Azurescape’ Kubernetes Attack Allows Cross-Container Cloud Compromise

In the multitenant architecture, each customer’s container is hosted in a Kubernetes pod on a dedicated, single-tenant node virtual machine (VM), according to the analysis, and the boundaries between customers are enforced by this node-per-tenant structure. “Since practically anyone can deploy a container to the platform, ACI must ensure that malicious containers cannot disrupt, leak information, execute code or otherwise affect other customers’ containers,” explained researchers. “These are often called cross-account or cross-tenant attacks.” The Azurescape version of such an attack has two prongs: First, malicious Azure customers/adversaries must escape their container; then, they must acquire a privileged Kubernetes service account token that can be used to take over the Kubernetes API server. The API Server provides the frontend for a cluster’s shared state, through which all of the nodes interact, and it’s responsible for processing commands within each node by interacting with Kubelets. Each node has its own Kubelet, which is the primary “node agent” that handles all tasks for that specific node.


The impact of ransomware on cyber insurance driving the need for broader cybersecurity knowledge

Effective security operations are critical to minimizing both the likelihood and the impact of a cyberattack. Disparate tools will not fix the effectiveness problem facing organizations across the globe, nor will they stand up to risk assessments and external insurer requirements. An effective security operations strategy provides risk management leaders the foundation to confidently negotiate with insurance providers and set a long-term cybersecurity agenda that protects the entire business. For insurance providers, there is an opportunity to partner with security operations experts to expand their cybersecurity expertise, to allow for more precise, accurate calculations for policyholders. Cyber insurers and security operations professionals must break down silos and recognize that together, they have a unique opportunity to coordinate effectively to better protect businesses. ... It’s paramount that insurance providers expand their knowledge on cybersecurity. The providers that do will be able to take full control over their policies. 



Quote for the day:

“It is more productive to convert an opportunity into results than to solve a problem – which only restores the equilibrium of yesterday.” -- Peter Drucker

May 14, 2016

Q&A with Shawn Callahan on Putting Stories to Work

The first thing you need to do to develop your storytelling skills is to find some stories, preferably about things that have happened to you. Then you must work out the lesson or insight that is contained in a story, share the story, and see what happens. Here are two tips that will help enormously. First, never use the word ‘story’ when you share your story. Don’t start by saying, ‘Hey guys, I want to share a story with you …’ Instead, start with the insight that is contained in the story. For example, your story might be about persistence, about just how important it is to stick with something. So you might start by saying, ‘You know what, a lot of success comes from persistence. A few years ago …’ And away you go. People will listen intently because they want to know the insight that’s based on your experience.


The UK builds a 'fintech bridge' to Singapore

The co-operation agreement enables the UK regulator to refer fintech firms to its counterpart, and vice versa, making it easier for fintechs to scale between countries. Both countries want to be global fintech hubs amidst growing competition from the US and China. A booming fintech industry is desirable for two reasons: it helps the national economy, and it promotes competition and growth in the financial services industry. But while both Singapore and the UK boast advantages for fintechs, they are relatively small markets — the UK has under 70 million people, while Singapore has around 6 million. The partnership will create opportunities for fintechs to scale beyond the countries' borders, making it easier for startups that choose to launch in these countries to attract investment.


Culture and Technology Can Drive the Future of Openstack

“OpenStack in the future is whatever we expand it to,” said Red Hat Chief Technologist, Chris Wright during his keynote at the OpenStack Summit in Austin. After watching several keynotes, including those from Gartner and AT&T, I attended other sessions during the course of the day culminating in a session by Lauren E Nelson, Senior Analyst at Forrester Research. Wright’s statement made me wonder about what lies in store for OpenStack and where the OpenStack Community—the “we” that Wright referred to—would take it in the future. Several sessions in the Analyst track called out the factors that explain the increased adoption of OpenStack as well as the technological challenges encountered.


15 Google Doc Features You Didn't Know Existed

While the capability to edit and make changes in a document is great, there are times when you only want to suggest changes -- without actually making any. That's where "Suggesting" mode in Google Docs comes in handy. It works a lot like Track Changes in Microsoft Word. First, switch from "Editing" mode to "Suggesting" mode by clicking the pencil icon at the top right of an open document, and then choosing "Suggesting." ... Want to comment on a document and get a specific person's attention? You can do that by tagging them in your comment. All you have to do is add an @ or a + sign, and then begin typing their name or email address. Google Docs will give you a couple of options based on your Gmail contacts, and once you've submitted the comment, it'll notify the person you mentioned by sending them an email.


Blockchain technology will revolutionize the world, enthusiasts say

Blockchain could disrupt transactions the way the internet did for communication. Any information that can be encrypted and stored in digital form can be transmitted — everything from real estate deals to medical records to transferring concert tickets. Blockchain is a “distributed ledger,” invented by the mysterious person or group known as Satoshi Nakamoto, that is accessible by everyone but controlled by no one. It’s searchable and public, making it more traceable than cash, but encrypted and anonymous to maintain privacy. Picture it as a communal record-keeping system — the kind small communities kept in the 16th century to keep track of births, marriages, property transfers, anything of importance — but on a massive global scale. Blockchain is seen as the next great disintermediation.


10 Ways Virtual Reality is Disrupting Industries

Most of all, virtual reality is helping teachers bridge the gap between what’s taught in the classroom and what’s out there in the real world. Putting this into practice recently, the British Museum partnered with Samsung and hosted a Virtual Reality Weekend. Families got a chance to view the museum’s artifacts using Samsung Gear VR. In fact, children above 13 were given a VR tour of the Bronze Age, where they could experience a 3D depiction of life as it was back then. While this is just the beginning, Google seems to be planning a Magic School Bus experience with its Expeditions Pioneer Program. Expeditions is a virtual reality platform that allows teachers to take kids on virtual field trips to places where buses can’t go. The program currently has more than 100 VR panoramas, including those of coral reefs and US financial centers.


Going Through the Scrum Motions as Opposed to Being an Agile Jedi

Doing Scrum and not being Agile is more challenging to discern. It occurs in organizations adopting Scrum as their preferred Agile approach. The astute observer will notice team behavioral patterns that suggest mechanical adoption rather than assimilation. The psychological pattern is that of introjection, like chewing on a mouthful of dry biscuits without being able to swallow. As with other managerial processes, it is easy to adopt the Scrum ceremonies rather than their intent. We have seen it happen before with Six Sigma, Total Quality Control, and other managerial processes. Achieving the intent requires a cultural change; cultural change requires organizational change; organizational change requires buy-in from key stakeholders, which in turn requires people championing the new process across the organization.


Road to Efficiency, Part 1

The responsibility for resiliency and access may move to the cloud solution provider, but if data is deleted (inadvertently or intentionally) or corrupted on a logical level (and we know applications never corrupt data, don’t we?), it doesn’t matter on which infrastructure it runs. Furthermore, most businesses typically require more than just the most recent point in time copy of data. Finally, remember that these requirements apply equally to IaaS, PaaS, and SaaS solutions. ... In the end, we need to enhance the value of the data itself. One way is by providing insight into all data, regardless of whether it resides on-premises or in the cloud, on primary storage or as part of data protection solution. Once we can gather and identify all data, the key is unlocking its value. Global search, hold and discovery are just some of the initial use-cases.


Security in a hybrid world: You can’t protect what you can’t see

There are two parts to enforcing the new normal: bringing your entire estate into compliance, and enforcing the use of this new baseline. Once you have determined a need for change (patching, configuration files, applications, you name it), you need to act quickly and across your entire environment. Automation is faster and less error prone, and it helps you reliably perform required actions across your entire estate. No matter how good you and your team are, and no matter how good your tools are, someone will always try to run older unpatched code, and someone will succeed if you don’t have automated policies in place to confirm and approve code execution based on software versions, configuration file settings, registry settings, and so on. One easy way to limit your exposure is to scan snapshots and live VMs for policy compliance.


Snowden interview: Why the media isn’t doing its job

A lot of people laud me as the sole actor, like I’m this amazing figure who did this. I personally see myself as having a quite minor role. I was the mechanism of revelation for a very narrow topic of governments. It’s not really about surveillance, it’s about what the public understands—how much control the public has over the programs and policies of its governments. If we don’t know what our government really does, if we don’t know the powers that authorities are claiming for themselves, or arrogating to themselves, in secret, we can’t really be said to be holding the leash of government at all. One of the things that’s really missed is the fact that as valuable and important as the reporting that came out of the primary archive of material has been, there’s an extraordinarily large, and also very valuable amount of disclosure that was actually forced from the government, because they were so back-footed by the aggressive nature of the reporting.



Quote for the day:


"If everyone has to think outside the box, maybe it is the box that needs fixing." -- Malcolm Gladwell


November 15, 2015

Sick of apps, mice and menus? Finland's Solu might have an answer

It involves an all-you-can-eat software subscription and near-unlimited storage. It also blends Mozilla's HTML5-led approach with Firefox OS, with which Mozilla hoped to draw web developers to its open mobile platform, and the path trodden by BlackBerry and Jolla, which developed their own OSes but equipped them to run Android apps. "We have a revenue-sharing model for our subscription a bit like Spotify, which is whenever people use your applications we pass on that subscription to the developers. This means that developers get money even by accident -- for example by users collaborating," Lawson said.


A Lifelong Learner’s Path from Library Science to Data Science

Kurt recognized that the entry level to math and software “is really boring, really difficult, and doesn’t have a lot of reward,” but he believes that: “one of the most dangerous things is the ingrained sense that I don’t know math, and can’t. The irony is when you look at the upper level advanced stuff in math, the skills needed are related to creative skills. The danger then becomes that the barriers of the tech industry keep the wrong people out. Poking at what’s happening and trying to find something more interesting about it fits both the mathematical and the creative mindset.”


OPNFV Won’t Be a Product

As to why OPNFV was even formed when there were already so many open source groups, Sen said NFV “brings stuff from all the other open source efforts. When we started, we thought we were building one platform, but it’s really a framework where you have a lot of choices.” Chris Wright, chief technologist with Red Hat, said network users’ expectations are being set by Internet companies where orders are self-service and immediate. “Tomorrow’s network is software-centric, with apps that scale out on-demand,” he said. However, network environments are much more complex than many of these Internet companies that have raised the bar.


CIOs receive low marks on IT reform report card

"This scorecard is not intended to be a juridical, prescriptive exercise. It should not be considered a scarlet letter on the back of a federal agency," says Rep. Gerry Connolly (D-Va.). "It is," he says, "an initial assessment, a point-in-time snapshot, much like the quarterly report card one might get from the university or at a school. The intent isn't to punish or stigmatize -- it is in fact to exhort and urge agencies to seize this opportunity and use the scorecard as a management tool to better guide decision making and investments within the agency." "To me the real measure will be six months or a year from now, did we really move the needle on these things?" says U.S. CIO Tony Scott.


Strong data security is not optional

A truly game-changing ruling in Remijas v. Neiman Marcus has made it easier for consumers to sue companies after breaches involving their personal data. Historically, even when sensitive information such as credit card numbers, birth dates, government ID numbers and medical records has been accessed, it’s been hard for consumers to sue companies over the breach. Companies have typically been able to avoid these lawsuits by invoking a Supreme Court case, Clapper v. Amnesty International. The case, which was about phone records and national security, required a showing of a risk of “imminent” and “concrete” injury in order to have standing to bring suit.


How social media can help employees perform better.

When someone connects with people who share perspectives and relationships similar to their own, those new connections typically don’t offer new insights or alternative viewpoints that person couldn’t have accessed before. The natural tendency to be drawn to people similar to yourself can create an “echo chamber,” which can lead to network structures that are detrimental to individual and organizational performance. Enterprise social networking platforms may be better able to offer features that counteract or overcome these typical social tendencies, allowing people to develop and maintain networks more beneficial to an organization’s purpose and to their own performance.


7 Key Risks All Businesses Should Manage (But Often Don’t)

Risk is inherent in doing business. The best way to fail is never to take any risks. But there are two kinds of risks: the kind you take consciously to move your company forward, and the kind that sneak up on you and pounce when you’re not looking. The latter are the kind companies must actively manage to avoid being wiped out. When it comes to managing risks, many companies prepare for natural disasters, fire, or maybe theft prevention (even though many small ones don’t even do that), but I think there are bigger risks companies of all sizes should manage. If you have a plan for what to do in case of physical emergency, you should also plan for ...


Claimed Breakthrough Slays Classic Computing Problem; Encryption Could Be Next

Computer scientists measure the difficulty of a problem by looking at how fast the computational resources an algorithm needs to solve it grow as the size of the input increases. Graph isomorphism is considered extremely difficult because the best known algorithm needs roughly exponentially more resources as the graphs it works on grow. That algorithm was published in 1983 by Babai with Eugene Luks of the University of Oregon. Babai claims that his new algorithm experiences a much less punishingly steep increase in resources as the graphs it works on get larger, giving graph isomorphism a major difficulty downgrade.
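For intuition about why the problem is hard, here is the naive brute-force check. This is an illustration, not Babai's algorithm: it tries every relabeling of the vertices, and the number of relabelings grows as n! with the vertex count.

```python
from itertools import permutations

def isomorphic(edges_a, edges_b, n):
    """Brute-force graph isomorphism for two undirected n-vertex graphs:
    try every vertex relabeling until one maps graph A's edges onto
    graph B's. Cost grows as n!, which is why faster algorithms matter."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if len(a) != len(b):  # different edge counts: cannot be isomorphic
        return False
    for perm in permutations(range(n)):
        if {frozenset((perm[u], perm[v])) for u, v in edges_a} == b:
            return True
    return False

# A 4-cycle with relabeled vertices is still a 4-cycle...
print(isomorphic([(0, 1), (1, 2), (2, 3), (3, 0)],
                 [(0, 2), (2, 1), (1, 3), (3, 0)], 4))  # True
# ...but a 4-cycle is not isomorphic to a triangle with a pendant edge
print(isomorphic([(0, 1), (1, 2), (2, 3), (3, 0)],
                 [(0, 1), (0, 2), (0, 3), (1, 2)], 4))  # False
```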


A New Architecture for Information Systems

Just as we can describe a new system, we can also assume a new architecture. Today’s systems architects deliver a crucial business function—carefully planning the relationships between nodes that include networked devices, software, services, and data in the context of business activities. The artifact from these activities typically takes the form of a reference architecture. As such, information architecture activities relate equally to the concepts, contexts, language, and intents that foster UX planning, or architecture, activities that articulate a strategy and roadmap for digital user engagement. As businesses and technologists embrace digital transformation and digital experience as vital strategic paradigms, they must mature their digital initiatives by extending their notion of the system to include the thoughtful consideration of information architecture and customers’ digital experience.


Google Cloud gains security for Docker containers

From a security standpoint, though, it's a challenge to tell whether the containers have any vulnerabilities or if there are issues with how the application is being developed. ... Twistlock's Container Security Suite scans the applications both in image registries and in runtime to detect vulnerabilities present in the Linux distribution, application frameworks, and custom-developed application code. It also has activity monitoring and smart profiling capabilities to detect misconfigurations and malicious activities and to take appropriate action, such as blocking the containers from launching and killing misbehaving containers dynamically. The suite can also apply enterprise access control policies to the container environment.



Quote for the day:


"Be clear about your goal but be flexible about the process of achieving it." -- Brian Tracy


August 28, 2013

Why Banks Are Finally Embracing Cloud Computing
The first use case for cloud computing in banks is application testing and development. It's a natural fit, since thorough testing of applications requires considerable computing resources but often takes just three to six months — so investing in equipment to test on doesn't make sense. In the next phase of cloud adoption for banks, they're starting to use human resources, accounting and operations apps in public clouds.


New SPARC M6 chip runs Oracle software faster
The latest SPARC processor has 12 processor cores, effectively doubling the core count of its predecessor, the M5, which shipped earlier this year. Each M6 core can run 8 threads simultaneously, giving the chip the ability to run 96 threads at once, said Ali Vahidsafa, senior hardware engineer at Oracle, during a presentation about the M6 at the Hot Chips conference in Stanford, California.


MIT Develops 110-core Processor for More Power-efficient Computing
Typically a lot of data migration takes place between cores and cache, and the 110-core chip has replaced the cache with a shared memory pool, which reduces the data transfer channels. The chip is also able to predict data movement trends, which reduces the number of cycles required to transfer and process data. The benefits of power-efficient data transfers could apply to mobile devices and databases, Lis said on the sidelines of the conference.


Has RAID5 stopped working?
If you had an 8-drive array of 2TB drives with one failure, your chance of hitting an unrecoverable read error during the rebuild would be near 100%. That second unreadable block during a RAID5 recovery is enough to destroy the RAID group and wipe out all the data on it. Not good! Even with a four-drive RAID5 on 2TB drives, you would have around a 40% chance of a rebuild failure. Better, but not good enough.
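The arithmetic behind those figures can be checked in a few lines, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read; check your drive's datasheet before relying on it.

```python
def rebuild_failure_probability(drives, drive_tb, ure_per_bit=1e-14):
    """Chance of hitting at least one URE while rebuilding a degraded
    RAID5 array: every surviving drive must be read end to end, and each
    bit fails independently with probability ure_per_bit (an assumption)."""
    bits_read = (drives - 1) * drive_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

print(f"4 x 2TB: {rebuild_failure_probability(4, 2):.0%}")
print(f"8 x 2TB: {rebuild_failure_probability(8, 2):.0%}")
```

This model gives roughly 38% for the four-drive case, matching the article's figure. For eight drives it gives about 67%; the "near 100%" figure comes from the cruder linear estimate, since the expected error count for the rebuild (about 1.12) already exceeds one.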


Static and dynamic testing in the software development life cycle
Securing your system requires different approaches and tools as a function of your phase in the life cycle (see Figure 1). During the design phase, you rely on good, secure design processes and reviews (and possibly some formal methods such as specification or modeling languages). In the development and verification phase, you have code that you can touch and test, a perfect target for automated review and for inspection while under execution. In production, you can inspect the application under execution.


5 Simple Ways to Become a Better Leader
"Building a real personal connection with your teammates is vital to developing the shared trust necessary to build a strong culture of accountability and exceptional performance," said Terry 'Starbucker' St. Marie, a leadership writer and consultant. "With that culture in place, the team can achieve a successful business, a happy team and a fulfilled leader."  St. Marie believes that being what he calls a "more human" leader requires positivity, purpose, empathy, compassion, humility and love.


We Win In Scalability & Performance Area Among Top NoSQL Databases Says Couchbase CEO
The database industry is 35 billion dollars today, and the vast majority of that industry is based on proprietary software. The NoSQL technologies, the operational databases, the analytics support provided by Hadoop, and things like that are causing a major disruption in the database industry, and we think that the winners with all this new technology will all be based on open source.


Adopt Centralised Flow Management to Optimize Network Performance
Such networks are resilient to failures of links and switching nodes—the loop protection and routing protocols reconverge onto a new forwarding topology, and data continues to flow. Congestion bottlenecks can be dealt with reasonably effectively: Quality of Service (QoS) rules prioritize real-time and critical data; selective packet dropping can slow TCP sessions down to a rate appropriate to the current traffic conditions; pause control requests end-points to back off for a while.


How to Enhance the Efficiency of Application Development
Although all output-based metrics have their pros and cons and can be challenging to implement, we believe the best solution to this problem is to combine use cases (UCs)—a method for gathering requirements for application-development projects—with use-case points (UCPs), an output metric that captures the amount of software functionality delivered. For most organizations, this path would involve a two-step transformation journey—first adopting UCs and then UCPs.
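A UCP calculation is simple enough to sketch. The weights below are Karner's original values; organizations typically calibrate their own, so treat all the numbers here as illustrative.

```python
# Karner's original weights (an assumption: teams often calibrate their own)
USE_CASE_WEIGHT = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHT = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, t_factor, e_factor):
    """UCP = (unadjusted use-case weight + unadjusted actor weight),
    scaled by technical and environmental complexity factors."""
    uucw = sum(USE_CASE_WEIGHT[c] for c in use_cases)
    uaw = sum(ACTOR_WEIGHT[c] for c in actors)
    tcf = 0.6 + 0.01 * t_factor   # technical complexity factor
    ecf = 1.4 - 0.03 * e_factor   # environmental complexity factor
    return (uucw + uaw) * tcf * ecf

# Hypothetical project: 3 average + 1 complex use case, 2 simple actors
ucp = use_case_points(["average"] * 3 + ["complex"], ["simple"] * 2,
                      t_factor=30, e_factor=15)
print(round(ucp, 1))
```

Multiplying UCP by a calibrated hours-per-point rate then yields an effort estimate, which is what makes it usable as an output metric for delivered functionality.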


With all of this innovating, isn’t it time for some innovation accounting?
The trick with innovation accounting is to measure the stuff that really matters. That means trading in “vanity metrics” for metrics that can drive the business forward, according to Ries. Vanity metrics aren’t wrong, per se, but they don’t provide insight into what might be the next best steps or what might need to be changed in order to meet customer interest or demand. Actionable metrics are more complex and provide an opportunity for comparison.



Quote for the day:

"Never accept the proposition that just because a solution satisfies a problem, that it must be the only solution." -- Raymond E. Feist

April 30, 2013

Hackers target shared Web hosting servers for mass phishing attacks
In this type of attack, once phishers break into a shared Web hosting server, they update its configuration so that phishing pages are displayed from a particular subdirectory of every website hosted on the server, APWG said. A single shared hosting server can host dozens, hundreds or even thousands of websites at a time, the organization said.


How Big Data Is Playing Recruiter for Specialized Workers
Companies use Gild to mine for new candidates and to assess candidates they are already considering. Gild itself uses the technology, which was how the company, desperate for programming talent and unable to match the salaries offered by bigger tech concerns, found this guy named Jade outside of Los Angeles. Its algorithm had determined that he had the highest programming score in Southern California, a total that almost no one achieves.


Servant leadership: A path to high performance
These leaders were servants in the best sense of the word. They were people-centric, valued service to others and believed they had a duty of stewardship. Nearly all were humble and passionate operators who were deeply involved in the details of the business. Most had long tenures in their organizations. They had not forgotten what it was like to be a line employee.


Three Gaps in Employee Productivity and What They Mean for IT
Fewer than 40% of employees are truly effective in the competencies shown to have the greatest impact on enterprise performance – right at the point where executives and managers consistently express the belief that they need at least 20% higher performance from employees to meet business goals. Where is employee productivity falling short, and what can IT and Infrastructure teams do to counter these figures?


The IT Conversation We Should Be Having
A simple summary of the work suggests that CEOs believe that CIOs are not in sync with the new issues CEOs are facing, CIOs do not understand where the business needs to go, and CIOs do not have a strategy, in terms of opportunities to be pursued or challenges to be addressed in support of the business.


IT Manager: An IT dashboard for the iPad
IT Manager is an app that offers IT managers another option for using an iPad as an administration tool for local network or web services. It’s a subscription-based app with a wide selection of network and web services admin tools. The growth of tablets and mobile apps in IT management means 24/7 operations go on, regardless of whether staff are working in a data center cage, a user’s desk, or responding to an outage after hours.


Infosec 2013: managing risk in the supply chain
For IT departments, securing information in the supply chain is one of the biggest challenges they face today. This is because supply chains are composed of various companies, all of which have their own set of security standards, and organisations struggle to communicate their requirements to all of these different parties. One way to approach the problem is to assess the “risk appetite” of your organisation, according to Mark Pearce, Head of Information Security at the Post Office.


How UpStream uses R for Attribution Analysis
Major retailers like Williams Sonoma use UpStream Software for marketing analytics, including revenue attribution, targeting, and optimization. In this video Tess Nesbitt (senior statistician at UpStream) describes how she uses Revolution R Enterprise and Hadoop to figure out the impact on various marketing channels (for example direct mail, email offers, and catalogs) on consumer retail sales.


A Note for the Boss Who Talks Too Much
Play leadership anthropologist in your own organization and chances are you’ll find a good number of these en-titled characters who are compelled to consume every possible molecule of oxygen and every moment of air-time to share their self-defined pearls of wisdom and precious nuggets of managerial and inspirational gold.


Microsoft Updates Cloud Agreement For HIPAA Rules
Cloud service providers are starting to take notice of the new HIPAA security regulations that define them as "business associates" of HIPAA-covered entities such as healthcare providers and health plans. Microsoft has just announced a revised business associate agreement (BAA) for its cloud services that reflects the new HIPAA Omnibus Rule governing data security.



Quote for the day:

"Experience is a hard teacher because she gives the test first, the lesson afterwards" -- Vernon Sanders Law

March 24, 2013

Is IT an Agent of Mass Extinction?
The Maya had a curious habit of not rebuilding their pyramids, but making them bigger and better by building new layers on top of existing pyramids. This is not exactly analogous to what IT has done over various technical generations; IT has not simply added layers of new stuff on top of existing stuff, but has fused the whole lot together into one giant incredibly complex architecture. Loose coupling is a fine architectural principle, but invariably honored in the breach where it really matters.


Big Data Analytics Can Help Banks Stop Cyber Criminals Accessing Secret Data
"It's a bit like 'casing the joint'. If you are a cyber criminal you have to case the joint looking at all the little bits of information that companies expose, trying to find user names or passwords, or the technology that they run so that you can design an attack that will succeed from the outside. So the whole model [of bank security] has gone inside-out."


Adobe and Apple: Allies and rivals through the ages
Apple and Adobe have a long history of both agreement and opposition. They've been closely linked since the early days of desktop publishing, often with complementary product lines and common customers, but they've also often wrestled for the upper hand in their relationship. Among the clearest contrasts in the shifting balance between the two companies are two similar moments nearly a decade apart.


Teleworking requires good information sharing
Taken to its logical conclusion, opposition to teleworking implies that global operating models also don’t work given that the objection to collaborating electronically must apply equally to employees who are in different offices as it does to those who are working from home. ... Whether it is a single employee working remotely or a team operating virtually over the globe, there are five principles that are needed to make them successful. At their core they are about establishing a free flow of information.


Stop Wasting Your Time Solving Problems
Problems shake the confidence of new leaders/managers and make them forget they have what it takes. Instilling confidence in them is more important than solving problems for them. Don’t solve people’s problems give them confidence they can solve them.


Apple buys WiFiSlam, maker of tech for locating phones indoors
Digits notes that Google currently offers indoor mapping in airports, shopping centers, sports stadiums, and other locations. It's not known if WiFiSlam's technology will somehow be incorporated into Apple's Maps app. Apple, of course, tossed Google Maps as the default mapping service in iOS and launched its own mapping app, which, on its debut last September, was lambasted for its shortcomings.


Samsung Looks At Enterprise Users With Galaxy S4 Launch
Samsung Knox is quite similar to BackBerry's Balance system. It allows users to create an encrypted container that stores all the sensitive work related apps and information including e-mail, contacts and documents. All this data is kept separate from the user's personal apps and settings. According to Samsung, this allows secured data to remain intact even if the phone experiences a malware attack.


Queues – the true enemy of flow
In short, queues have a direct economic impact on the business. They increase inventory, stall valuable projects, which increases the risk of loss, delay feedback and impact on motivation and quality. Yet in spite of this, they are rarely tracked or targeted. A company that carefully keeps account of every hour of overtime is quite likely to be blissfully unaware of the cost of delay to a project caused by long queues.


Cloud Service Brokerage is the new Enterprise Architecture
No longer will architects perform such low level solution designs involving ‘nuts and bolts’ components, but they will leverage a portfolio of services available via the broker; and the IT engineer or system integrator is no better off as they will no longer rack and stack, but will merely configure the Cloud services available through enterprise connected models which can be provided by the CSB.


William Schiemann on Reinventing Talent Management
In this AMA podcast, Dr. William Schiemann’s talks about his book Reinventing Talent Management: How to Maximize Performance in the New Marketplace aims to put a tight process and discipline of measurement behind the annual report cliché that “Our people are our most important asset.” At the heart of Dr. Schiemann’s thesis is a provocative new talent management model he terms People Equity.



Quotes for the day:

"Educate and inform the whole mass of the people. They are the only sure reliance for the preservation of our liberty." -- Thomas Jefferson

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf