Showing posts with label Middleware. Show all posts
Showing posts with label Middleware. Show all posts

Daily Tech Digest - April 24, 2026


Quote for the day:

"To strongly disagree with someone, and yet engage with them with respect, grace, humility and honesty, is a superpower." -- Vala Afshar


🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 31 mins • Perfect for listening on the go.


Data debt: AI’s value killer hidden in plain sight

Data debt has emerged as a critical barrier to artificial intelligence success, acting as a "value killer" for modern enterprises. As CIOs prioritize AI initiatives, many are discovering that years of shortcuts, poor documentation, and outdated data management practices—collectively known as data debt—are causing significant project failures. Unlike traditional business intelligence, AI is uniquely unforgiving; it rapidly exposes deep-seated issues such as siloed information, inconsistent definitions, and missing context. Research suggests that delaying data remediation could lead to a 50% increase in AI failure rates and skyrocketing operational costs by 2027. This debt often accumulates through mergers, acquisitions, and the rapid deployment of fragmented systems without centralized governance. To address this growing threat, organizational leaders must treat data debt as a board-level risk rather than a simple technical glitch. Effective remediation requires more than just better technology; it demands a fundamental shift in organizational discipline and the standardization of core business processes. By establishing a reliable data foundation and rigorous governance, companies can prevent their AI ambitions from being stifled by sustained operational friction. Ultimately, addressing data debt is not just a prerequisite for scaling AI responsibly but a vital investment in long-term institutional stability and competitive advantage.


The Autonomy Problem: Why AI Agents Demand a New Security Playbook

As artificial intelligence transitions from passive chat interfaces to autonomous agents, the cybersecurity landscape faces a fundamental shift that renders traditional defense models insufficient. This evolution, often referred to as the "autonomy problem," stems from agents' ability to execute multi-step objectives, interact with APIs, and modify enterprise data independently without constant human intervention. Unlike standard software, agentic AI introduces dynamic risks such as prompt injection, excessive agency, and "logic hijacking," where an agent might be manipulated into performing unintended high-privilege actions. Consequently, security teams must move beyond static identity management and perimeter defense toward a runtime-centric strategy focused on continuous behavioral validation. A new security playbook for this era emphasizes "least privilege" for AI entities, ensuring agents only possess the temporary permissions necessary for a specific task. Furthermore, implementing robust observability and "Human-in-the-Loop" (HITL) checkpoints is critical for high-stakes decision-making. By treating AI agents as digital employees rather than simple tools, organizations can better manage the expanded attack surface. Ultimately, the goal is to balance the massive operational scale offered by autonomous systems with a governance framework that prioritizes transparency, real-time monitoring, and rigorous sandboxing to prevent self-directed machine speed from becoming a liability.


How indirect prompt injection attacks on AI work - and 6 ways to shut them down

Indirect prompt injection attacks represent a critical security vulnerability for Large Language Models (LLMs) that process external data, such as web content, emails, or documents. Unlike direct injections, where a user intentionally feeds malicious commands to a chatbot, indirect attacks occur when hackers hide instructions within third-party data that the AI is likely to retrieve. When the LLM parses this "poisoned" content, it may unknowingly execute the hidden commands, leading to serious risks like data exfiltration, the spread of phishing links, or unauthorized system overrides. For instance, a malicious website could contain hidden text telling an AI summarizer to ignore its safety protocols and send sensitive user information to a remote server. To mitigate these evolving threats, organizations are adopting multi-layered defense strategies, including rigorous input and output sanitization, human-in-the-loop oversight, and the principle of least privilege for AI agents. Major tech companies like Google, Microsoft, and OpenAI are also utilizing automated red-teaming and specialized machine learning classifiers to detect and block these subtle manipulations. For end-users, staying safe involves limiting the permissions granted to AI tools, treating AI-generated summaries with skepticism, and closely monitoring for any suspicious behavior that suggests the model has been compromised.


Advanced Middleware Architecture For Secure, Auditable, and Reliable Data Exchange Across Systems

The article "Advanced Middleware Architecture For Secure, Auditable, and Reliable Data Exchange Across Systems" by Abhijit Roy introduces a high-performance framework designed to bridge the critical gap between security, auditability, and efficiency in distributed environments. Utilizing a layered architecture built on Python and FastAPI, the proposed system integrates JWT-based stateless authentication with cryptographic integrity checks—such as SHA-256 hashing and HMAC signatures—to ensure non-repudiation and end-to-end traceability. By employing asynchronous message processing and standardized Pydantic data models, the middleware achieves a 100% transaction success rate and supports over 25 concurrent users, significantly outperforming legacy systems. Key results include a throughput of 6.8 messages per second and an average latency of 2.69 ms, with security overhead minimized to just 0.2 ms. This structured workflow facilitates seamless interoperability between heterogeneous platforms, making it highly suitable for mission-critical applications in sectors like healthcare, finance, and industrial IoT. The framework not only enforces consistent data validation and type safety but also enhances compliance efficiency through extensive logging and rapid audit retrieval times. Ultimately, the study demonstrates that robust security and detailed audit trails can be maintained without compromising system performance or scalability in complex multi-cloud or containerized settings.


The Performance Delta: Balancing Transaction And Transformation

Alexandra Zanela’s article exploring "The Performance Delta" emphasizes the critical necessity of balancing transactional and transformational leadership behaviors rather than viewing them as mutually exclusive personality traits. Transactional leadership serves as a vital foundation, providing organizational stability and psychological safety by establishing clear expectations, measurable goals, and contingent rewards. However, while transactions ensure tasks are fulfilled, they rarely inspire innovation. This is where transformational leadership—driven by the "four I’s" of idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration—triggers the "augmentation effect." This effect creates a performance delta where effectiveness is multiplied rather than merely added, fostering employee growth, extra-role effort, and reduced burnout. As artificial intelligence increasingly automates the execution of routine transactional tasks like KPI monitoring and resource allocation, the role of the modern leader is shifting. Leaders are now tasked with designing the transactional frameworks while dedicating their freed capacity to human-centric transformational actions that AI cannot replicate, such as professional coaching and ethical vision-setting. Ultimately, thriving in the modern era requires leaders to master both modes, strategically toggling between them to maximize their team’s collective potential and successfully navigate profound organizational changes.


Digital Twins Could Be the Future of Proactive Cybersecurity

Digital twins are revolutionizing cybersecurity by providing dynamic, high-fidelity virtual replicas of IT, OT, and IoT infrastructures. According to the article, these "cyber sandboxes" enable organizations to transition from reactive defense to proactive, rehearsal-based strategies. By simulating sophisticated threats like ransomware campaigns and zero-day exploits within controlled environments, security teams can identify vulnerabilities and analyze the "blast radius" of potential breaches without risking production systems. The technical integration of AI further enhances these models, contributing to significant operational improvements, such as a 33% reduction in breach detection times and an 80% decrease in mean time to resolution. Beyond threat modeling, digital twins facilitate more effective network management and physical security optimization, allowing for the pre-deployment testing of firewall rules and access controls. This technology supports the "shift-left" and "shift-right" paradigms, ensuring security is embedded throughout the entire system lifecycle. Despite challenges regarding data integrity and implementation costs, the strategic adoption of digital twins—currently explored by 70% of C-suite executives—represents a transformative shift toward organizational resilience. By leveraging these real-time simulations, enterprises can validate security postures and implement targeted mitigation strategies, ultimately staying ahead of increasingly automated and stealthy cyberattackers in a complex digital landscape.


How to Manage Operations in DevOps Using Modern Technology

Managing operations in modern DevOps environments requires shifting from manual, queue-based workflows to a streamlined model focused on automation, visibility, and developer enablement. According to the article, modern operations encompass not just infrastructure and deployments but also security, compliance, and cost visibility. To handle these complexities, teams should prioritize automating repetitive tasks and codifying changes through Infrastructure as Code and policy-as-code tools like Open Policy Agent. These automated guardrails ensure consistency and compliance without hindering development speed. Furthermore, the strategic integration of Artificial Intelligence and AIOps can significantly reduce operational toil by identifying anomalies and grouping alerts, though humans must remain the final decision-makers regarding critical reliability. Observability tools provide deeper insights than traditional monitoring by correlating metrics, logs, and traces to diagnose system health in real-time. Perhaps most crucially, the article advocates for the creation of self-service platforms and internal developer portals, which empower engineers to manage their own services while maintaining strict operational standards. By embedding security into daily workflows and using data-driven metrics to track progress, organizations can transform their operations teams from bottlenecks into enablers of innovation. Ultimately, modern technology simplifies management by fostering a culture where the best path is also the easiest one for teams to follow.


Your Data Strategy Isn’t Ready for 2026’s AI, and Neither Is Anyone Else’s

The article argues that most current data strategies are woefully inadequate for the AI landscape expected by 2026. While organizations are currently fixated on basic Generative AI, they are failing to prepare for the rise of "agentic AI"—autonomous systems that require seamless, real-time data access rather than static reports. The central issue is that legacy architectures were designed primarily for human consumption, featuring siloed structures and slow governance processes that cannot support the high-velocity demands of sophisticated machine learning models. To bridge this gap, companies must prioritize "data liquidity" and shift toward AI-native infrastructures. This transformation requires moving away from traditional dashboards and investing in active metadata management, robust data observability, and automated quality controls. By 2026, the competitive divide will be defined by an organization’s ability to feed autonomous agents with high-fidelity, interconnected information. Consequently, businesses must stop viewing data as a passive asset and start treating it as a dynamic, scalable engine for automated decision-making. Failing to modernize these foundations now will leave enterprises unable to leverage the next generation of intelligence, rendering their current AI initiatives obsolete as the technology evolves into more complex, independent operational systems.


Agentic AI to autonomous enterprises: Are businesses ready to hand over decision-making?

The article by Abhishek Agarwal explores the transformative shift from traditional analytical AI to "agentic" systems, which are capable of planning and executing multi-step operational tasks without constant human intervention. Unlike previous AI iterations that merely provided insights for human review, agentic AI can independently manage complex workflows such as supplier selection, inventory management, and customer support. While the business case for these autonomous enterprises is compelling due to gains in speed, scalability, and consistency, the transition presents significant challenges regarding governance and accountability. Organizations must grapple with who is responsible for errors and whether their existing data infrastructure is mature enough to support reliable, large-scale decision-making. The debate over "human-in-the-loop" oversight remains central, with experts suggesting a domain-specific strategy where autonomy is reserved for well-defined, low-risk areas. Ultimately, the author emphasizes that becoming an autonomous enterprise is a strategic journey rather than a race. Success depends on building robust governance frameworks and ensuring high data quality to avoid accountability crises. Rushing into agentic AI prematurely could jeopardize long-term progress, making a thoughtful, honest assessment of readiness essential for any business aiming to leverage these powerful technologies for a sustainable competitive advantage in the modern digital landscape.


When Elite Cyber Teams Can’t Crack Web Security

The article "When Elite Cyber Teams Can’t Crack Web Security" by Jacob Krell explores the significant disparity between theoretical security credentials and practical defensive capabilities. Drawing from Hack The Box’s 2025 Global Cyber Skills Benchmark, which tested nearly 800 corporate security teams, Krell reveals a troubling reality: only 21.1% of these elite teams successfully identified and mitigated common web vulnerabilities. This performance gap persists across highly regulated sectors like finance and healthcare, suggesting that clean compliance audits and professional certifications often provide a false sense of security. The report highlights a "Certification Paradox," where industry-standard exams prioritize knowledge retention over the applied skills necessary to thwart real-world attacks. Furthermore, the abysmal 18.7% solve rate for secure coding challenges exposes the "Shift Left" movement as largely aspirational, with many organizations automating pipelines without cultivating security competency among developers. To address these systemic failures, Krell argues that businesses must move beyond "security theater" by implementing performance-based validations and continuous hands-on training. Ultimately, true resilience requires embedding security as a core craft within development teams rather than treating it as an external compliance checkbox, as attackers exploit practical skill gaps that tools and credentials alone cannot bridge.

Daily Tech Digest - April 13, 2025


Quote for the day:

"I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." -- Maya Angelou



The True Value Of Open-Source Software Isn’t Cost Savings

Cost savings is an undeniable advantage of open-source software, but I believe that enterprise leaders often overlook other benefits that are even more valuable to the organization. When developers use open-source tools, they join a collaborative global community that is constantly learning from and improving on the technology. They share knowledge, resources and experiences to identify and fix problems and move updates forward more rapidly than they could individually. Adopting open-source software can also be a win-win talent recruitment and retention strategy for your enterprise. Many individual contributors see participating in open-source software communities as a tangible way to build their own profiles as experts in their field—and in the process, they also enhance your company’s reputation as a cool place where tech leaders want to work. However, there’s no such thing as a free meal. Open-source software isn't immune to vendor lock-in, when your company becomes so dependent on a partner’s product that it is prohibitively costly or difficult to switch to an alternative. You may not be paying licensing fees, but you still need to invest in support contracts for open-source tools. The bigger challenge from my perspective is that it’s still rare for enterprises to contribute regularly to open-source software communities. 


The Growing Cost of Non-Compliance and the Need for Security-First Solutions

Regulatory bodies across the globe are increasing their scrutiny and enforcement actions. Failing to comply with well-established regulations like HIPAA or GDPR, or newer ones like the European Union’s Digital Operational Resilience Act (DORA) and NY DFS Cybersecurity requirements, can result in penalties that can reach millions of dollars. But the costs do not stop there. Once a company has been found to be non-compliant, it often faces reputational damage that extends far beyond the immediate legal repercussions. ... A security-first approach goes beyond just checking off boxes to meet regulatory requirements. It involves implementing robust, proactive security measures that safeguard sensitive data and systems from potential breaches. This approach protects the organization from fines and builds a strong foundation of trust and resilience in the face of evolving cyber threats. ... Many businesses still rely on outdated, insecure methods of connecting to critical systems through terminal emulators or “green screen” interfaces. These systems, often running legacy applications, can become prime targets for cybercriminals if they are not properly secured. With credential-based attacks rising, organizations must rethink how they secure access to their most vital resources.


Researchers unveil nearly invisible brain-computer interface

Today's BCI systems consist of bulky electronics and rigid sensors that prevent the interfaces from being useful while the user is in motion during regular activities. Yeo and colleagues constructed a micro-scale sensor for neural signal capture that can be easily worn during daily activities, unlocking new potential for BCI devices. His technology uses conductive polymer microneedles to capture electrical signals and conveys those signals along flexible polyimide/copper wires—all of which are packaged in a space of less than 1 millimeter. A study of six people using the device to control an augmented reality (AR) video call found that high-fidelity neural signal capture persisted for up to 12 hours with very low electrical resistance at the contact between skin and sensor. Participants could stand, walk, and run for most of the daytime hours while the brain-computer interface successfully recorded and classified neural signals indicating which visual stimulus the user focused on with 96.4% accuracy. During the testing, participants could look up phone contacts and initiate and accept AR video calls hands-free as this new micro-sized brain sensor was picking up visual stimuli—all the while giving the user complete freedom of movement.


Creating SBOMs without the F-Bombs: A Simplified Approach to Creating Software Bills of Material

It's important to note that software engineers are not security professionals, but in some important ways, they are now being asked to be. Software engineers pick and choose from various third-party and open source components and libraries. They do so — for the most part — with little analysis of the security of those components. Those components can be — or become — vulnerable in a whole variety of ways: Once-reliable code repositories can become outdated or vulnerable, zero days can emerge in trusted libraries, and malicious actors can — and often do — infect the supply chain. On top of that, risk profiles can change overnight, making what was a well considered design choice into a vulnerable one almost overnight. Software engineers never before had to consider these things, and yet the arrival of the SBOM is making them do so like never before. Customers can now scrutinize their releases, and then potentially reject or send them back for fixing — resulting in even more work on short notice and piling on pressure. Even if the risk profile of a particular component changes between the creation of an SBOM and a customer reviewing it, then the release might be rejected. This is understandably the cause of much frustration for software engineers who are often already under great pressure.


Risk & Quality: The Hidden Engines of Business Excellence

In the world of consultancy, firms navigate a minefield of challenges—tight deadlines, budget constraints, and demanding clients. Then, out of nowhere, disruptions such as regulatory shifts or resource shortages strike, threatening project delivery. Without a robust risk management framework, these disruptions can snowball into major financial and reputational losses. ... Some leaders see quality assurance as an added expense, but in reality, it’s a profit multiplier. According to the American Society for Quality (ASQ), organizations that emphasize quality see an average of 4-6% revenue growth compared to those that don’t. Why? Because poor quality leads to rework, client dissatisfaction, and reputational damage. ... The cost of poor quality is substantial. Firms that don’t embed quality into their culture ultimately face consequences like customer churn, regulatory fines, and declining market share. Additionally, fixing mistakes after the fact is far more expensive than ensuring quality from the outset. Organizations that invest in quality from the start avoid unnecessary costs, improve efficiency, and strengthen their bottom line. As Philip Crosby, a pioneer in quality management, stated, “Quality is free. It’s not a gift, but it’s free. What costs money are the unquality things—all the actions that involve not doing jobs right the first time.” 


Enabling a Thriving Middleware Market

A more unified regulatory approach could reduce uncertainty, streamline compliance, and foster an ecosystem that better supports middleware development. However, given the unlikelihood of creating a new agency, a more feasible approach would be to enhance coordination among existing regulators. The FTC could address antitrust concerns, the FCC could promote interoperability, and the Department of Commerce could support innovation through trade policies and the development of technical standards. Even here, slow rulemaking and legal challenges could hinder progress. Ensuring agencies have the necessary authority, resources, and expertise will be critical. A soft-law approach, modeled after the National Institute for Standards and Technology (NIST) AI Risk Management Framework, might be the most feasible option. A Middleware Standards Consortium could help establish best practices and compliance frameworks. Standards development organizations (SDOs), such as the Internet Engineering Task Force or the World Wide Web Consortium (W3C), are well-positioned to lead this effort, given their experience crafting internet protocols that balance innovation with stability. For example, a consortium of SDOs with buy-in from NIST could establish standards for API access, data portability, and interoperability of several key social media functionalities.


How to Supercharge Application Modernization with AI

The refactoring of code – which means restructuring and, often, partly rewriting existing code to make applications fit a new design or architecture – is the most crucial part of the application modernization process. It has also tended in the past to be the most laborious because it required developers to pore over often very large codebases, painstakingly tweaking code function-by-function or even line-by-line. AI, however, can do much of this dirty work for you. Instead of having to find places where code should be rewritten or modified in order to optimize it, developers can leverage AI tools to look for code that requires attention. ... When you move applications to the cloud, the infrastructure that hosts them is effectively a software resource – which means you can configure and manage it using code. By extension, you can use AI tools like Cursor and Copilot to write and test your code-based infrastructure configurations. Specifically, AI is capable of tasks such as writing and maintaining the code that manages CI/CD pipelines or cloud servers. It can also suggest opportunities to optimize existing infrastructure code to improve reliability or security. And it can generate the ancillary configurations, such as Identity and Access Management (IAM) policies, that govern and help to secure cloud infrastructure.


Balancing Generative AI Risk with Reward

As businesses start evolving in their use of this technology and exposing it to a broader base inside and outside their companies, risks can increase. “I’ve always loved to say AI likes to please,” said Danielle Derby, director of enterprise data management at TriNet, who joined Rodarte at the presentation. Risk manifests “because AI doesn’t know when to stop,” said Derby, and you, for example, may not have thought about including a human or technology guardrail to keep it from answering a question you hadn’t prepared it to be able to accurately manage. “There are a lot of areas where you’re just not sure how someone who’s not you is going to handle this new technology,” she said. ... Improper data splitting can lead to data leakage, resulting in overly optimistic model performance, which you can mitigate by using techniques like stratified sampling to ensure representative splits and by always splitting the data before performing any feature engineering or preprocessing. Inadequate training data can lead to overfitting and too little test data can yield unreliable performance metrics, and you can mitigate these by ensuring there is enough data for both training and testing based on problem size, and using a validation set in addition to training and test sets.


Why Cybersecurity-as-a-Service is the Future for MSPs and SaaS Providers

For MSPs and SaaS providers, adopting a proactive, scalable approach to cybersecurity—one that provides continuous monitoring, threat intelligence, and real-time response—is crucial. By leveraging Cybersecurity-as-a-Service (CSaaS), businesses can access enterprise-grade security without the need for extensive in-house expertise. This model not only enhances threat detection and mitigation but also ensures compliance with evolving cybersecurity regulations. ... The increasing complexity and frequency of cyber threats necessitate a proactive and scalable approach to security. CSaaS offers a flexible solution by outsourcing critical security functions to specialized providers. This ensures continuous monitoring, threat intelligence, and incident response without the need for extensive in-house resources. As cyber threats evolve, CSaaS providers continuously update their tools and techniques, ensuring we stay ahead of emerging vulnerabilities. CSaaS enhances our ability to protect sensitive data and allows us to confidently focus on core business operations. As threats evolve, CSaaS providers continually update their tools and techniques, ensuring companies stay ahead of emerging vulnerabilities. ... Embracing CSaaS is essential for maintaining a robust security posture in an increasingly complex digital landscape.


Meta: WhatsApp Vulnerability Requires Immediate Patch

Meta has voluntarily disclosed the new WhatsApp vulnerability, now published as CVE-2025-30401, after investigating it internally as a submission to its bug bounty program. The company says there is not yet evidence that it has been exploited in the wild. The issue likely impacts all Windows versions prior to 2.2450.6. The WhatsApp vulnerability hinges on an attacker sending a malicious attachment, and would require the target to attempt to manually view the attachment within the software. A spoofing issue makes it possible for the file opening handler to execute code that has been hidden as a seemingly valid MIME type such as an image or document. That could pave the way for remote code execution, though a CVE score has yet to be assigned as of this writing. ... The WhatsApp vulnerability exploited by Paragon was a much more devastating zero-click (and one that targeted phones and mobile devices), similar to one exploited by NSO Group on the platform to compromise over a thousand devices. That landed the spyware vendor in trouble in US courts, where it was found to have violated national hacking laws. The court found that NSO Group had obtained WhatsApp’s underlying code and reverse-engineered it to create at least several zero-click vulnerabilities that it put to use in its spyware.

Daily Tech Digest - December 26, 2017

Bitcoin, Blockchain and Gold: A Year of Disruption


Genesis Mining is the largest cloud bitcoin mining company and is growing rapidly, adding 50,000 people a day, and now has more than 2 million users. HIVE has an exclusive arrangement with Genesis to operate its data centers, so HIVE offers the opportunity to invest in the cryptocurrency sector without investing in bitcoins themselves. ... Another factor is the success of the quantitative approach, which results in very low volatility. Quants have upended the market using artificial intelligence and other algorithms. Using data mining, algorithms will direct trades within seconds. Some 60% of daily trades are now quant driven, and the average holding period of a stock is about four days. So, things are moving at a much faster pace. Here’s one example: CIBC looked at 200 case studies and found that when a company’s press release had one of seven different headlines, the stock would fall 25%.



Four tips for moving to a new managed SD-WAN service provider


Knowing what went wrong can help customers specifically select a new provider to fix the problem. Perhaps the existing provider's chosen platform turned out to be a bad fit for the customer's actual WAN usage. Maybe the provider failed to meet service-level agreements. Perhaps the provider's support organization is understaffed or experiencing major turnover that affects service. Or, maybe the provider is bad at managing third parties, like last-mile connectivity providers, for example. Whatever the reason, make sure the wisdom of experience informs the selection of the new managed SD-WAN service provider. ...  If the provider makes the change-over experience difficult, it loses any chance of winning other areas of your business, winning you back in the future or getting a good recommendation from you.


Small and Medium Business Security Strategies, Part 1: Introduction


This look is to be expected at this point. You might be asking something along these lines: What are secure configurations? How can I possibly understand “Limited Admin Privilege?” Seriously, what is vulnerability assessment and remediation? We are going to start slow, set realistic goals and will work together to get your network under control. So where to from here? No one has time, no one wants extra duties and everyone has to step in and participate. Based on experience, most offices, businesses, schools, et cetera have someone around who knows about computers. This person is usually the go-to resource for broken printers, blue screened workstations and internet outages. This person is an asset and should serve as a guide for this process. They can answer questions and will definitely know what a modem, switch and router look like.


5 Digital Transformation Trends for Healthcare Industry in 2018

Digital Transformation trends in Healthcare industry
When we are talking so much about the digital transformation in the healthcare sector, do we really know what it is and how it helps? Digital transformation is not just about buying new technologies and latest tools that can ease the process of healthcare procedures, but it is also about changing the operational process in the healthcare sector and making it more automatic and efficient. Digital transformation focuses more on information handling. If the information gathered from the modern digital devices can be channelized effectively, it can be structured to give automated responses to the existing and upcoming health problems. Merely purchasing a technology will not produce any result. There must be a plan about how we are going to use that technology to engage with the whole process of health care revamp. In some parts of the world, the planning to engage digitization with healthcare has started to produce results.


Could Blockchain Transform the Direct Selling of Insurance and Annuity Products

Distributed Ledger Technology Beyond the Hype?
Currently, there is no central authority maintaining or clearing these transactions. Each broker-dealer must maintain their own book of business. For a registered broker-dealer FINRA 17a-3 regulations require the on-going maintenance of client books and records, which is a process that would be greatly simplified by allowing the client to maintain their information directly through to the ledger. This would replace the current method of mailing letters and the subsequent manual process of updating account owner and registration information. Of course, challenges remain for the wider use of DLT until solutions for issues such as scalability and the integration with legacy systems are developed. The development of APIs is necessary to provide the integration with off-chain legacy business applications. These challenges limit wider implementation of the technology.


The 6 stages of an Advanced Ransomware Threat attack


In the ransom phase, attackers deploy ransomware to data stores where target business data resides. The ransom is timed for the date when it will have the most impact, such as just before a major announcement, during mergers and acquisitions, or surrounding audits. They may use any flavor of ransomware as long as it effectively makes the data unavailable and gives the attacker the only keys to decrypt it. ... As mentioned, there is no single form of ransomware used in ARTs. Attackers may perform the encryption using custom programs or they may use a combination of ransomware variants to encrypt data on different types of devices such as Macs or Linux servers. Finally, be aware that some attackers use ARTs as a diversion. The attackers may have already stolen the data they wanted, so they manually infect the systems with ransomware, counting on the company to wipe machines and restore from backup, thus erasing any remaining evidence of the cybercriminal’s presence.




Mobile Internet is Helping Chinese Financing Soar to New Heights

As an “upstart” rising sharply in the field of the global Internet, the developmental speed of Internet finance in China is also amazing. The number of people purchasing Internet financial products has reached 500 million, and the scale of Internet asset management has seen eight-fold increase over the last 4 years. More Chinese people directly manage their financial matters on a PC and mobile device, instead of going to the bank. Practitioners of Internet finance represented by Tianhong Asset, based on the Alipay platform of the controlling shareholder Ant Financial, have realized efficient interaction and promotion both online and offline, providing their investors with a more efficient and convenient financial experience.In China’s Internet finance industry, Tianhong Asset has set up the industry’s first cloud-based De-IOE large settlement system – Tianhong Asset Management Cloud Direct Marketing System.


AI technology adoption: What’s holding us back?


While AI adoption is beginning to prove its usefulness in many industries, it needs to have access to the right data in order have meaningful impact. Strong data quality underpinning machine learning will amplify the efficiency of organisation, and never has the phrase ‘quality in, quality out’ been more relevant. While organisations have mountains of data to sift through, strong data quality underpinning machine learning will amplify the efficiency of your business. In terms of critical success factors, context sits right alongside data quality. Chatbots are a good example of efficient machine learning currently in implementation, and of the importance of context in producing a useful outcome. While the term ‘self-learning’ is often subscribed to AI it can be misleading. It is only by acquiring more data over time and understanding the context in which this information can be accurately used that AI can improve overall performance and start to deliver real business value


Cloud API And Management Platforms & Middleware Market To Increase

electronics
Cloud API and management platforms and middleware market is fast picking up pace due to wide adoption of mobile and cloud applications for back-end services and wide adoption of microservices based architectures by enterprises. These applications comprise of small, independent processes that communicate via API, thereby creating demand for cloud API and management platforms. Emergence of Internet of Things also has a pivotal role to play in the growth of cloud API market as more number of connected devices requires more analysis and monitoring at a fast rate that can be done through cloud APIs. In addition to this several start-ups and small scale enterprises are adopting assemble from components methodology that enables end users to use APIs to connect apps to IT assets, resulting in growth in usage of cloud APIs.


New Seagate tech promises to double hard drive speeds

New Seagate tech promises to double hard drive speeds
Seagate has a high-density technology in the works called Heat-Assisted Magnetic Recording (HAMR), set to launch next year and in volume in late 2019, that will deliver hard disks of 20TB and more. Western Digital has something similar in the works at roughly the same timeframe. These drives will have up to eight storageplatters, and performance overall will suffer if the network is choking while waiting for data to load. “Capacity is only half of the solution. If the ability to rapidly access data doesn’t keep pace with all that capacity, the value potential of data is inhibited. Therefore, the advancement of digital storage requires both elements: increased capacity and increased performance,” Seagate said in its blog. The only way to make hard drives faster was to make them read and write faster without spinning the disk at a higher speed, so the multiple actuator drive technology makes sense.



Quote for the day:


“Learning does not make one learned: there are those who have knowledge and those who have understanding. The first requires memory and the second philosophy.” -- Alexandre Dumas


Daily Tech Digest - December 14, 2016

Public vs. Private vs. Hybrid Cloud - Exploring the use Cases

Despite some of the challenges and associated costs of the private cloud model, many bigger firms are compelled to choose private due to the security risks of public. The potential damage to a company’s brand and the loss of customer trust after a public cloud breach can exponentially surpass the costs of the private cloud. ... Implementing a private cloud securely can prove difficult unless you utilize the help of a third-party service. This is where a qualified IT consultancy such as TechBlocks can provide critical guidance on the best practices for implementation, and perhaps discuss the case for a hybrid public-private approach. ... The hybrid cloud is increasingly the path for organizations that desire a customizable approach with reduced maintenance costs and time. Pursuing a hybrid approach is often the path IT will take to convince upper management that the cloud is safe and a good option for critical data.


The mainframe is hindering application delivery

“Organisations face both business and technical challenges on the mainframe, preventing them from innovating and transforming into a digital business. To avoid issues with the mainframe, organisations are working around it, re-platforming, or modernising. However, each of these tactics creates new issues. The good news is that those companies embracing DevOps deliver faster and at a higher quality, all while fostering collaboration,” said Compuware CEO Chris O’Malley Compuware, which commissioned the study, has been aggressively leading the transformation of the mainframe into a fully Agile and DevOps-enabled platform where development, testing and operations processes can occur at the same rapid pace as they do on distributed and cloud platforms.


10 Clear Principles for the 96% that Need Culture Change

“Although it’s important to engage employees at every level early on, all successful change management initiatives start at the top, with a committed and well-aligned group of executives strongly supported by the CEO.” It is imperative for the top team to be on the same page regarding both why the change is necessary and “the particulars for implementing it.” The top leader or any member of the top team will dramatically undermine change efforts if they are directly or indirectly sending messages that are in conflict with the change effort. They must act in a different way that’s consistent with the change effort and visible to all. ... “Mid-level and frontline people can make or break a change initiative. The path of rolling out change is immeasurably smoother if these people are tapped early for input on issues that will affect their jobs.”


Advocate Congress establish a permanent joint committee on information technology

This joint committee was formed in response to both a dramatic threat and an incredible opportunity. The threat was the potential of nuclear war. The opportunity was the potential to use nuclear science to generate electricity to power cities as well as naval vessels, as well as opportunities to use nuclear science in medicine and industry. It was clear to congress at the time that success in response to the threat and success in gaining national benefit from nuclear energy would require a different way of doing things. So, the response was the United States Atomic Energy Act of 1946. For over 30 years the Joint Committee this act set up provided bi-partisan solutions broadly supported and widely credited with bringing unity of effort to many multiple complex activities.


DevOps capabilities vary widely by industry vertical

DevOps maturity varies according to the business sphere that companies occupy, and some are constrained by the characteristics of their markets -- from heavy regulation in the financial services and life sciences industries to stifling technical debt in the retail and media and entertainment sectors. Other markets, such as healthcare and transportation, face unique cultural challenges to bringing a DevOps mindset to the software development process. ... The philosophy of increased IT automation and collaboration between development and operations -- which, in some industries, are no longer separate groups at all -- is here to stay. "Consumers, empowered by rich software interactions with access to internet resources, have never had more power or choices," wrote Forrester Research analysts in their report "The State of DevOps Industry Adoption for 2016 -- Where's the Heat?"


Nine Questions to Ask to Determine IoT Device Safety

While IoT brings forth many benefits to consumers—from convenience to energy efficiency, to monitoring babies and locating lost pets—it also brings risk. ... These IoT devices were used them to take out the Dyn DNS Server this September. As a consumer, you might think… “why should I care if my device is involved in a DDoS attack? As long as it works, I don’t mind.” Well, some 20,000 residents in Finland found out the hard way why it matters, when their building’s IoT connected thermostats stopped functioning because the devices were enslaved to a botnet conducting a DDoS attack (By the way, it’s cold in Finland in November). Whether you are a consumer considering a connected device as a gift for the holidays, or a reporter about to review the next wave of IoT devices launching at CES, we have put together a list of questions you should ask before diving in:


Why soft skills outweigh hard skills for IT-business collaboration

The skills needed in IT change so frequently that businesses are more interested in finding qualified candidates with strong soft skills -- workers who can grow and adapt in a quickly changing landscape, says Palm. Qualified workers can always take a course or complete training in areas where they need more knowledge, but it's not as easy to teach someone how to be collaborative or to communicate effectively. Palm says she's seen an increase in applicants that fit this "t-shaped personality," which means "an individual has a broad set of skills, but only a few areas where the skillset goes deep." T-shaped workers are the type of employees who are "agile and able to rapidly adapt to new changes," she says. They constantly adjust to new and uncharted territory, learn new skills as needed and stay up to date on emerging trends.


Don't Like Russian Cyberspies? Tips To Stop State-Sponsored Hackers

“Customers are looking for a magical button to stop all these threats,” he said. Businesses will then buy the tools and assume they’re safe, when in reality they aren’t properly being used. For example, many businesses often fail to install security patches with their IT products -- including the antivirus software -- exposing them to hacks that otherwise could have been prevented. They may also ignore the warnings that pop up from security software, believing them to be a false positive. Or they’ll even forget to turn the software on.  However, in other cases, the businesses had limited expertise on staff to deal with the cyberthreats the security tools encountered. “If you buy the tools without hiring the right people, you are not going to solve your nation-state hacking problem,” Firstbrook said.


Blockchain – The Next Big Thing for Middleware

Fascinating new technologies are emerging these days. Everybody talks about cloud, containers, big data and machine learning. Another disrupting technology is blockchain. You might have heard about blockchain as the underlying infrastructure of Bitcoin. But Bitcoin is just the tip of the iceberg. This article explains the use cases and technical concepts behind blockchain, gives an overview about available services, and points out why middleware is a key success factor in this space. ... Welcome to the world of blockchain where smart contracts process such a scenario automatically and in a secure way. Governments in conjunction with global non-profit airline associations like International Air Transport Association (IATA), which “support aviation with global standards for airline safety, security, efficiency and sustainability,” could enforce airlines to compensate customers automatically as it is defined by law.


Google Tries To Advance IoT Security With Android Things

Android Things comes after the world got some more glimpses into how insecure many products can be. IoT devices were used to take down popular websites on the East Coast (and elsewhere) in October. Then in November, critical vulnerabilities were discovered in popular IoT cameras--a problem that repeated itself when backdoors were found in Sony's internet-connected cameras in early December. The IoT market had a bad couple of months. These issues have led to calls to improve the security of IoT devices. The problem is that many companies drag their feet in responding to problems, lack the infrastructure to push updates to devices that have already been sold, or simply don't care about the security of their products. Making sure these devices are safe for their owners and for the internet at large just isn't a priority for the manufacturers churning them out.



Quote for the day:


"Most people who sneer at technology would starve to death if the engineering infrastructure were removed." -- Robert A. Heinlein


May 31, 2016

Will blockchain make the leap from cryptocurrency to smart machines?

This is despite something of a crisis of confidence in Bitcoin’s own cryptocurrency heartland, where some insiders are arguing that the model and the specific software architecture have been tested and found wanting. Some of the issues they raise inevitably apply to the extension of the Blockchain to IoT; if, as they allege, the Blockchain is itself failing to scale to support its core business, then it’s not going to be much good for IoT either. There are also concerns about the processing power and the associated electrical energy that would be needed to perform the encryption needed for all those objects. The underlying data for a blockchain-based IoT application doesn’t have to be stored on a centralized server architecture paid for by the enterprise, but it still has to be stored — and the need to maintain multiple copies surely increases rather than obviates the storage requirement.


Death or rebirth: What does the future of the PC really look like?

Microsoft's vision was to put a (Windows) computer on every desk and in every home. It pretty much managed it, at least in the richer parts of the world. But many of those PCs - especially the ones at home - are now forgotten and covered with dust. That's because smartphones and tablets are easier and quicker to use, and can do the vast majority of things you can do with a PC. Indeed there are plenty of things that a standard PC or laptop cannot do that a smartphone can. To put it another way: PC makers have struggled - and most failed to answer the question posed by Apple CEO Tim Cook last year: "Why would you buy a PC anymore? No really, why would you buy one?" Now this doesn't mean the PC is dead: selling 232 million this year shows that. But it does mean that the PC is going to change, and so will PC makers.


Gartner's Litan Analyzes SWIFT-Related Bank Heists

Litan, who recently blogged about the lessons the SWIFT-related heists should teach U.S. banks about authentication weaknesses and lacking security controls, says banks need to implement the same controls for interbank transactions that they have in place for customer-to-bank payments. Fraud detection and risk mitigation is a shared responsibility, she adds. "We read a lot in the media about finger pointing, where SWIFT was saying it was the banks' responsibility and the banks were saying it was SWIFT's responsibility," Litan says. "Everyone needs to wake up and realize this is a shared responsibility."


What did one car say to the other car? If you make that turn I'll hit you!

It works because your car and that pickup are exchanging their location, speed, acceleration, direction and steering faster than we can blink. Many consider this conversation -- called vehicle-to-vehicle, or V2V -- the most important lifesaving technology to hit the auto industry in the past 10 years. If V2V did nothing more than warn you not to turn left or enter an intersection, it could prevent about half a million crashes and save around 1,100 lives a year, according to the National Highway Traffic Safety Administration. But automakers, universities and government organizations are exploring V2V for more than just intersection safety. ... "Whoa, don't pass that horse trailer, because there's oncoming traffic you can't see." Because of benefits like these, the US Department of Transportation is pushing automakers to adopt V2V within the next few years.


EMC and smaller players planning open-source storage middleware

The company has been quietly updating its community website, emccode.com, with a roadmap via the GitHub code repository. That's a long way from when the old hardware-centric EMC began its storage diversification push more than a decade ago, at the time being ribbed as "Expensive, Monolithic, Closed" by then-new storage networking competitor Sun Microsystems. Bernstein said EMC's early successes in open-source storage include Rex-Ray, which links containers to storage, and Polly, which provides storage resource management to virtual machines. His team will keep churning out open-source storage container projects, including some contributed by customers, as long as the container market keeps developing. "The biggest challenge right now is there's a lot of fragmentation in the market. There's no clear winner," he observed.


Raspberry Pi: The smart person's guide

Windows was another recent addition to the board. The Pi runs Windows 10 IoT Core, a cut-down version of Windows 10, not designed to run a desktop PC but instead to help hardware hackers prototype Internet of Things (IoT) appliances using the Pi. Not only are there three different generations of Pi but there are two primary models, the Model B and the lesser specced Model A. The Model A lacks Ethernet, has less memory than the B and only has one USB port. However, it sells for the lower price of $25 and draws less power. Generally the Pi 3 is the better choice than the Pi 2, as it's more powerful and is the same price. However, the Pi 1, while a good deal less powerful, is cheaper than the Pi 3, and also available in the more compact, less power hungry Model A configuration. That said, a Pi 3 Model A is due to be released this year.


Back-end integration a struggle for IoT companies

Augury is exploring several different possibilities including reducing the cost of diagnostics for commercial repair firms, improving customer outreach for appliance vendors and enabling new insurance models. The company has already lined up contracts with some of the largest HVAC repair companies in the U.S. for the on-demand diagnostic service. Yoskovitz expects appliance makers to eventually embed low cost sensors into their washing machines and refrigerators. This would make it easier to proactively send out repair technicians or recommend upgrades when machines have reached the end of their life. "After the one-year warranty, most manufacturers lose contact with the customer," he said. "If anything goes wrong, a customer will call someone on Craigslist, and the spare parts will be Chinese knockoffs.


Parallel Processing and Unstructured Data Transforms Storage

New approaches to application virtualization are also having a revolutionizing effect on the use of data storage. Operational requirements for big data analytics on unstructured data is driving the adoption of "application specific storage architectures" and real-time storage configurability. Tiering is also an enabler for the adoption and efficient operational deployment of container and microservice technologies. This reality presents a compelling case for rapid enterprise adoption of advanced tiered storage architectures. When selecting storage technologies, the smart money goes to those solutions that support the industry’s need for high performance and economical high density online storage. In order to enable the highest degree of storage automation, the solution should be able to manage the various storage technologies through a consistent interface or API.


Devops: A Culture or Concrete Activity?

The DevOps philosophy cannot be entirely divorced from processes, much like the branches of a tree cannot be disassociated with the trunk. This is where development models come into play. Schmidt supplies the example of continuous delivery, which entails building a solution in such a way that it can be released at any point in production. This doesn't necessarily mean that it has to be released in its crudest form, only that it hypothetically could, and that any potential loose ends would be tied up. Achieving this model requires extremely well-choreographed collaboration among developers, QA management, designers and other departments – so basically, an unremitting adherence to the DevOps philosophy.  Continuous delivery is essentially agile software development testing on steroids. The objective of agile is still to make defined builds for delivery.


Exercises for Building Better Teams

The concept of work organization has been evolving for years. Not only agile practitioners have discovered that self-organized teams are highly effective. A strong manager is not a requirement for a well-performing team, but that does not mean that self-organized teams lack leadership. ... To ensure that such a balance exists, Alexis Phillips and Phillip Sandahl proposed a Team Diagnostic model based on Blake’s leadership grid. They translated “concern for people” at the management side to a measurement of team positivity that reflects team spirit and joy of work. They transformed “concern for result” into team productivity, which means effectiveness in delivering results. They identified critical competencies for each of those areas and it is amazing how well this list aligns with the agile mindset.



Quote for the day:


"Don't expect to build up the weak by pulling down the strong." -- Calvin Coolidge


November 07, 2015

Why use NGINX as a load balancer?

In addition to being free, scalable, and easier to maintain, the key reason many organizations want an open source load balancer is that it provides a more flexible development environment, which helps organizations adopt a more agile development process. Sarah says that when compared with other options, NGINX offers huge performance improvements. "With NGINX, organizations can deliver applications reliably and without fear of instance, VM, or hardware failure," she says. "This is crucial as websites and applications make their way into our everyday lives." In the typical setup in most organizations, web server and ADC (application delivery components, often hardware) are separate components. But when it comes to web application delivery, NGINX is changing that approach.


Cyber-security: The cost of immaturity

All sorts of companies offer cyber-security services, from small, specialist outfits to giant arms companies such as BAE Systems (which TalkTalk has hired to sort out its mess). The biggest firms are finding it hard to keep staff. As in the public-relations and corporate-intelligence industries, if you know your stuff, you can make more money starting up on your own. Venture-capitalists are not showering money on the industry as prodigiously as they did a year ago, but the fast growth rate means that raising capital is still easy. The big companies are still able to trade on their brand name (nobody gets fired for hiring IBM) but the mammals are beating the dinosaurs.


Mobile Collaboration: Where Does It Rank on Your Priority List?

Collaboration is a major driver for increased mobile usage. We’ve used chat and messaging tools to maintain personal connections for years with popular apps like FaceTime, WhatsApp, Voxer, and many others. With their success in fostering simple and straightforward communication, it was only a matter of time before these apps found their way into the business world. This “consumerization of IT” is paving the way for new breeds of business-class technology to redefine the boundaries of the workplace. But it is more than technology fueling this shift. The way teams form and work together has changed as well.


How data screwups may decide the fitness tracker wars

The fitness tracker war won't be won or lost on hardware design, app graphics, the ability to track exercise and sleep or the response to a well-heeled rival like Apple or Samsung. The fitness tech game will be won on how well vendors handle your data. Every relationship has a breaking point -- the one moment when you say enough and move on forever. Apply that axiom to the fitness tracking industry and the breaking point is when your favorite wearable loses your data. In recent months, I've suffered data losses about a dozen times across two vendors. If you use a fitness tracker, there's nothing worse than going for a run, hitting 20,000 steps and watching the app give you credit and then refresh and lose the information. It's like the run never happened.


Mocking Financial Middleware System

Integration is primarily core part of any financial application as either way you have to integrate with banking host or middle ware and this is not an easy job at all. Do keep in mind that Host systems usually refers to Core Banking Systems and Middle ware is actually channel integrator that talks to host systems like ATM, SMS, phone banking, IVR, WAP, etc In the development environment the most critical challenge is to write integration code offsite because middleware or host systems are not available at development centers. This critical limitation forces companies to do all the integration onsite which obviously increase the development and post production support cost because in majority of cases fixes needs to be investigated onsite.


Delivering Software with Water-Scrum-Fall

Agile is a mindset, a set of guidelines, Scrum is a framework that can be deployed, it has strict rules and events to follow, they are not the same thing. Agile thinking exposes our inability to deliver fast, it drives out what the customer actually wants and improves quality by providing multiple opportunities for continuous feedback, which in turn focuses the developers towards how to build the right thing right. There some simple things you should start to be aware of and look to change. Firstly collaboration. This has to start with engaging your customers, they need to understand how you work, especially if you are going to use Scrum or even just looking to change the way you work. Your customers will need to understand what is capable and what role they will need play.


Jane Austen on Python: The intersection of literature and tech

Creative thinking is required to determine where our code might break, to build in checks, and to return useful data. Any programmer can return the error message, but to compose an error message that is helpful—rather than intimidating—requires a programmer who is also a creative thinker and possesses excellent written and verbal communication skills. When you're writing good tests, you're doing world building: Accessibility requires empathy, and empathy requires imagination. You can leverage that awesome feeling you get when you get lost in a book and identify with the main character by putting yourself in the shoes of the people using your code. Imagine their struggles and frustrations. Create a persona for them. Fix the things that hinder or annoy them about your app.


Why Ford is shifting its focus from cars to 'mobility'

Ford has taken several steps to address these larger transportation issues. A big piece is a new focus on e-bikes. In June, they unveiled the MoDe:Flex, a versatile bike that can be used in different needs such as the road, mountain or city riding. Another is "GetAround"—in which customers who finance through Ford credit can allow vehicles to become part of a peer-to-peer carsharing service. The company's innovate mobility series challenged cities around the world to solve different mobility problems, specific to local communities. In Mumbai, for example, the problem was how to get around in monsoon season. The solution: Using data you could get from the car. Windshield wipers, Klampfl said, could indicate heavy rain in different areas.


A Twitter app for rural Kenyan potato farmers

“SokoShambani is a market-based micro-logistics platform that enables small-scale farmers to trade directly with high value market entities,” explains Stephen Kimiri, the CEO and developer. Farmers subscribe to a free SMS service on the 8988 short-code powered by Twitter and @ViaziSouthRift. “Through this, they are able to trade directly while sharing farming intelligence, market reports and updates,” explains Kimiri. The startup has singled out potato farmers in rural Kenya after noting that potato is a staple food second only to maize in Kenya and that farmers are usually given a raw deal for their produce in the markets. “With such a small number of big consumers with stable and obvious demand, an ineffective supply chain and large number of small-scale participants, it makes this value chain particularly ‘ripe’ for intervention,” says Kimiri.


SharePoint Server 2016: IT's Ultimate Swiss Army Knife?

Information technology leadership faces unique strategy con­cerns that simply didn't exist 10 years ago. Managing the relationship between cloud, mobile and on-premises applications requires an eye toward innovation and change -- and a willingness to take a few risks to improve business practices. A recent Gartner Inc. report, "Flipping to Digital Leadership: Insights from the 2015 Gartner CIO Agenda Report," best describes the challenges of IT decision makers. "Seizing this opportunity requires flipping long-held behaviors and beliefs -- from a legacy perspective to a digital one in information and technology leadership, from a focus on the visible to the genuinely valuable in value leadership, and from control to vision in people leadership."



Quote for the day:


"Nothing will ever be attempted if all possible objections must first be overcome." -- Samuel Johnson


December 20, 2014

Roy Fielding on Versioning, Hypermedia, and REST
Anticipating change is one of the central themes of REST. It makes sense that experienced developers are going to think about all of the ways that their API might change in the future, and to think that versioning the interface is paving the way for those changes. That led to a never-ending debate about where and how to version the API. ... This is precisely the problem that REST is trying to solve: how to evolve a system gracefully without the need to break or replace already deployed components.


Buckle up IT: The enterprise needs you for cloud adoption
"When you're talking about applications and services that are important to the enterprise, it's crucial to have IT in the loop so that they can assess security, performance and availability risk factors," Olds said. "Sort of like having your doctor by your elbow at the buffet. It's not as satisfying, but you'll be healthier in the long run." Allan Krans, an analyst with Technology Business Research, noted that the days of companies having multiple silos of information and applications running without IT's involvement or knowledge should be coming to an end in the coming year.


Dear Enterprises: Now Is The Time To Get Freelance Work Right
The writing is on the wall. In 2015, enterprises MUST get it right when it comes to managing independent contractors. More companies than ever are turning to freelancers and independent contractors — especially large, billion-dollar enterprises who can capture significant business value from deploying a flexible, non-employee workforce. Since we first started Work Market in 2010, we’ve been helping enterprise companies navigate the nuances of independent work. Fast forward five years, and we’re seeing a greater number of enterprises turn to independent workers than ever before.


SQL Zip Compression, RegEx and Random Functions
While SQL Server natively supports storing data as compressed, with this library we are able to achieve goals that transcend any one application layer. ... The biggest advantage is that since the data is stored and delivered compressed, it is low impact on SQL (both Disk I/O and CPU) and the network to deliver the data to the client. This opposed to SQL native compression where SQL compresses on receive and decompresses on send, the network then recompresses while sending, then the client decompresses the network packet on receive.


Wouldn’t it be fun to build your own Google?
Imagine you had your own copy of the entire web, and you could do with it whatever you want. (Yes, it would be very expensive, but we’ll get to that later.) You could do automated analyses and surface the results to users. For example, you could collate the “best” articles (by some definition) written on many different subjects, no matter where on the web they are published. You could then create a tool which, whenever a user is reading something about one of those subjects, suggests further reading: perhaps deeper background information, or a contrasting viewpoint, or an argument on why the thing you’re reading is full of shit.


New Intel Platform Rich with Transformative Features
Many high-performance computing (HPC) users are familiar with Intel AVX 1.0, which increased floating point packet processing from 128 bit to 256 bit. Now Intel AVX2 doubles integer packet processing, from 128 bit to 256 bit. That essentially doubles your integer processing ability on the same clock speeds. This advance will drive new workload performance gains, particularly for the demanding HPC applications used in life sciences, physics, engineering, genomic research, data mining, and other types of compute-hungry scientific and industrial work. In our testing, we have seen up to a 1.9x increase in performance with Intel AVX2.[1],[2]


We Still Don’t Understand Very Well How Social Change Occurs in the Digital Age
he Internet is responsible for one of the paradoxes of the digital age. We are just a click away from having a friend in the antipodes, but we end up following friends who we already know from work, school or just around the corner. Ethan Zuckerman, director of the Center for Civic Media at MIT, denounces the lack of globalism on the web and alerts us to a few hazards that may cause damage to the democratic quality of our governments. Zuckerman proposes alternatives to the current model of Internet business and puts the magnifying glass on users in order to understand how social changes come about in the digital age.


Creating a Sales Dashboard with Bootstrap and ShieldUI
Although a complete Dashboard can include any possible combination of widgets and layout elements, I have picked the most widely used. From a layout perspective, the page is divided into responsive panels, the positioning of which is determined by the Bootstrap layout system. On smaller screens, each section is adequately positioned to occupy all of the available space. The widgets used are JQuery QR code, JQuery rating control, two different layouts for the graphs, utilizing a JQuery Chart plugin, a circular progress bar, and a grid. From a development perspective, I have used a simple html file, which hosts all the required code.


Artificial Skin That Senses, and Stretches, Like the Real Thing
Finally, in a further effort to make the materials seem more realistic, they added a layer of actuators that warm it up to roughly the same temperature as human skin. The new smart skin addresses just one part of the challenge in adding sensation to prosthetic devices. The larger problem is creating durable and robust connections to the human nervous system, so that the wearer can actually “feel” what’s being sensed. In a crude demonstration of such an interface, Dae-Hyeong Kim, who led the project at Seoul National University, connected the smart skin to a rat’s brain and was able to measure reactions in the animal’s sensory cortex to sensory input.


Alchemy: Message Buffer
One topic that has been glossed over up to this point is how is the memory going to be managed for messages that are passed around with Alchemy. The Alchemy message itself is a class object that holds a composited collection of Datum fields convenient for a user to access, just like a struct. Unfortunately, this format is not binary compatible or portable for message transfer on a network or storage to a file. We will need a strategy to manage memory buffers. We could go with something similar to the standard BSD socket API and require that the user simply manage the memory buffer. This path is unsatisfying to me for two reasons:



Quote for the day:

"Ignorance is a death sentence for any leader as it eliminates the option to take action effectively." -- @ManagersDairy