When Time Is Of The Essence … But Lasting Value Matters Too

A Wichita Public Schools Case Study

Certain circumstances demand swift action, and there is nothing wrong with getting results when pushed by necessity. But the real value comes when the benefits extend well beyond the original situation.

Take for instance our customer – the Wichita Public Schools. As the largest school district in the state of Kansas, it supports nearly 50,000 students and 7,000+ teachers and staff across more than 90 schools and special program sites.

When the pandemic hit, the school district needed to quickly switch to virtual learning and remote work. It was only natural for them to turn to NEC. After all, they were already a customer and were successfully using our on-premises Unified Communications solution.

Within four weeks of the request for help, we delivered a modern approach to hybrid accessibility and academic achievement with our Microsoft-certified NETWORK CONNECT for MS Teams intelligent call routing solution.

Teachers and staff could route calls directly to specific numbers in Teams as well as to the desk phones on their NEC UC platform, providing seamless accessibility regardless of location. Additionally, with unmatched carrier redundancy, regulatory compliance and failover assurance, our solution successfully delivered rich, reliable home teaching, remote advising and simplified teacher-parent support.

Read the complete experience: Download your free copy of this case study or read our press announcement.

The Long-Term Effect of NETWORK CONNECT for MS Teams

But what started from necessity has also become a long-range win for the school district.

“The silver lining during these difficult times is that we’re better prepared for disruptions, weather emergencies and other fast-changing circumstances thanks to our partnership with NEC,” according to Rob Dickson, CIO of Wichita Public Schools.

Take a closer look at the Wichita Public Schools experience

NETWORK CONNECT Has So Many Uses To Keep Your Organization Up & Running

Interested in exploring what NETWORK CONNECT could do for your organization?

Contact Us Today

Digital Retail Signage Solutions Come of Age with Additional Technology Capabilities

When was the last time you went to a retail store and had a great experience? No, I don’t mean customer service or super cool music in the dressing rooms. When was the last time that you went into a retail store and had a memorable experience that made you smile, or gave you more information about a product or service in real time than you might have gleaned online?

Retailers in particular have become more intrigued with digital signage over the last decade. The shift to digital meant that organizations could change their pricing, menus, or other details in real time, without needing to change out physical signage. Today, three types of digital signage are typically used in customer-facing environments.

Passive Signage

Passive signage is what you would encounter at a fast food restaurant at the airport. This type of digital signage is typically displayed on one or more screens, and either remains static all the time or changes at set intervals, such as a digital menu board that switches over at noon. Passive signage can be hosted on a local machine or over the Internet, and it’s generally more cost effective than running print jobs every time your business wants to make a change. It’s very utilitarian. However, it lacks the interactive element that really draws people in.

Interactive Signage

Interactive signage takes many forms, and is generally designed to provide a level of user interaction by being “triggered” by an event. Think of an iBeacon that sends a coupon to your mobile phone when you walk into your favorite clothing store or displays information about a painting when you hold your smartphone up next to it at the museum. Another example is signage triggered by sensors – when you lift that bottle of Bordeaux at the wine store, perhaps a light sensor is triggered and you see a map and information about the wine on a screen. Or you hold a skirt up to a mirror in the changing room, and its RFID tag triggers signage behind the translucent mirror suggesting other pieces that may go with it. This type of signage is indeed interactive and can be engaging, but the engagement is not always intuitive, and there is typically only one level of engagement between the individual and the signage.
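To make the trigger model above concrete, here is a minimal sketch of a single-layer interactive display. The tag IDs, product pairings and rendering call are invented for the example; they do not come from any particular signage product.

```python
# Minimal sketch of trigger-based interactive signage.
# The RFID tag IDs, product catalog and display call below are hypothetical.

RELATED_ITEMS = {
    "skirt-0042": ["belt-0107", "blouse-0311"],  # items merchandisers pair with this skirt
}

def show_on_mirror(message: str) -> None:
    # Stand-in for the rendering layer behind the translucent mirror.
    print(message)

def on_rfid_read(tag_id: str) -> None:
    """Called when the changing-room reader detects a garment's RFID tag."""
    suggestions = RELATED_ITEMS.get(tag_id, [])
    if suggestions:
        show_on_mirror("Pairs well with: " + ", ".join(suggestions))
    else:
        show_on_mirror("Welcome! Ask an associate for styling ideas.")

on_rfid_read("skirt-0042")
```

Note that the interaction is one level deep: a single event produces a single response, which is exactly the limitation intelligent signage tries to move past.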

Intelligent Signage

This brings us to a new type of digital signage. What if there was a way to create compelling in-store experiences in which customers could interact with a truly intuitive digital system, perhaps even order products on-screen, while also feeding interaction data back to the retailer or business owner? It is for this use case that intelligent signage, perhaps today’s most cutting-edge technology in retail and digital advertising, was designed.

One of the most interesting of these new systems is Microsoft’s Inception solution, which uses the Microsoft Kinect sensor to detect an individual’s proximity to the sensor, his or her age and gender (using NEC biometric facial recognition technology), and his or her interaction with products on a shelf. Different distances and interactions can trigger different layers of contextual signage, such as static or video advertisement screens, product pricing, technical specs, user reviews from the web and more. The system also records anonymous data such as the demographics and engagement time of individuals with various products, allowing advertisers and business owners to better understand their audience and the effectiveness of their signage.
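The layering behavior can be pictured as a simple decision over distance and interaction. The thresholds and layer names in the sketch below are assumptions for illustration only, not Inception’s actual configuration.

```python
# Sketch: pick a signage layer from viewer distance and shelf interaction.
# Distance thresholds and layer names are illustrative placeholders.

def select_layer(distance_m: float, touching_product: bool) -> str:
    if touching_product:
        return "product_details"    # pricing, technical specs, user reviews
    if distance_m > 3.0:
        return "attract_loop"       # full-screen static or video advertisement
    if distance_m > 1.0:
        return "category_overview"  # featured products for the aisle
    return "product_details"

for distance, touch in [(4.2, False), (1.8, False), (0.6, True)]:
    print(f"{distance} m, touching={touch} -> {select_layer(distance, touch)}")
```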

Intelligent signage systems use advanced yet inexpensive hardware, including the Kinect sensor and lightweight PCs such as those embedded in NEC commercial displays, and they can be combined with sensors in the ceiling (for in-store heat mapping) or at the point of sale. This gives the business owner a broad understanding of how people traverse a particular store, and of which products customers of which ages and genders are buying at which times of day. Intelligent signage systems introduce analytics for the real world, and they are going to change the way we experience in-store retail.

But analytics, particularly analytics on Big Data, require more than an intelligent signage system to be most useful to retail companies. One option is Microsoft Azure Stream Analytics. This offering provides real-time insight into which products are attracting shoppers’ attention, how product interest varies by age and gender, and which displays draw the most attention, so stores can tune the shopping experience to maximize sales. Azure Stream Analytics is a new addition to the company’s Azure IoT (Internet of Things) Suite, enabling the retail industry to build and deploy IoT solutions that transform the shopping experience and the business model behind it.
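The kind of question such a pipeline answers can be illustrated with a small batch aggregation in Python rather than an actual Stream Analytics query. The event schema and field names below are hypothetical; a real deployment would stream similar events into Azure and aggregate them there.

```python
# Illustrative aggregation over signage interaction events: total engagement time
# per product and demographic segment. Event fields are invented for the example.
from collections import defaultdict

events = [
    {"product": "headphones", "age_band": "18-24", "gender": "F", "dwell_s": 42},
    {"product": "headphones", "age_band": "35-44", "gender": "M", "dwell_s": 12},
    {"product": "tablet",     "age_band": "18-24", "gender": "F", "dwell_s": 75},
]

dwell_by_segment = defaultdict(int)
for e in events:
    dwell_by_segment[(e["product"], e["age_band"], e["gender"])] += e["dwell_s"]

for (product, age, gender), total in sorted(dwell_by_segment.items(), key=lambda kv: -kv[1]):
    print(f"{product:<12} {age:<6} {gender}  {total}s of engagement")
```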

What’s Next

With the digital signage market projected to grow a staggering 65% in 2015 alone, it’s more than likely that intelligent signage and audience measurement systems will be arriving soon at a business near you. Overall, intelligent signage could lead not only to greater efficiencies in the retail sector, but also to far more interesting in-store experiences for shoppers.

Where Can I Test Drive One of These Systems?

The Inception system will be demonstrated at the upcoming Microsoft Build (April 28 – May 1) and Microsoft Ignite (May 4-8) conferences.

An Interview with Atsushi Kitazawa of NEC Japan, the “Father” of IERS

Everything you wanted to know about IERS, from its position in the world of next-generation databases to its design goals, architecture, and prominent use cases.

I recently got the chance to talk to Atsushi Kitazawa, chief engineer at NEC Corporation, about the company’s new InfoFrame Elastic Relational Store (IERS) database.    I enjoyed the discussion with Kitazawa-san immensely – he has an ability to seamlessly flow from a deep technical point to a higher-level business point that made our talk especially informative.

Matt Sarrel (MS): Where did the idea for IERS come from?

Atsushi Kitazawa (AK): We decided to build IERS on top of NEC’s micro-sharding technology in 2011. The reason is that all of the cloud players see scalability and consistency as major features, and we wanted to build a product with both. Google published its Google File System implementation in 2003 and then published Bigtable (a KVS) in 2006. Amazon published Dynamo (also a KVS) in 2007. NEC published our CloudDB vision paper in 2009, which helped us establish the architecture of a key-value store under the database umbrella. In 2011, Facebook published work on improving the performance of Apache Hadoop, and Google published its method for transaction processing on top of Bigtable, called Megastore. Those players looked at scalability first and then consistency. By 2011 they had both.

A KVS is well-suited for building a scalable system. The performance has to be predictable under increasing and changing workloads. At the beginning, all the cloud players were using replication in order to increase performance, but they hit some walls because of the unpredictability of caching. You cannot cache everything. So they moved to a caching and sharding architecture, in which data is partitioned across multiple servers in order to increase in-memory caching. The problem is that it is not easy to shard a database in a consistent manner. This is the problem of dynamic partitioning: the initial partitioning or sharding is not so difficult, but repartitioning on the fly is very difficult. The end goal of many projects was to provide a distributed KVS. The requirement of a KVS is predictability of performance under whatever workload we have.

MS: Why is a KVS better?

AK: The most important thing about a KVS is that we can move part of the data from one node to another in order to balance performance. Typically, the implementation of a KVS relies on small partitions that can be moved between nodes. This is very difficult when you consider all of the nodes included in a relational database or any database for that matter. In a KVS, everything is built on the key value so we can track where data resides.
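As a toy sketch of the idea Kitazawa-san describes, the snippet below derives a record’s partition from its key alone and rebalances by moving a whole partition between nodes. The partition count, hash choice and node names are invented; IERS’s micro-sharding internals are not described at this level of detail in the interview.

```python
# Toy sketch of key-to-partition routing and partition movement in a KVS.
# Partition count, hashing scheme and node names are illustrative only.
from hashlib import blake2b

NUM_PARTITIONS = 64
partition_to_node = {p: f"node-{p % 3}" for p in range(NUM_PARTITIONS)}  # 3-node cluster

def partition_of(key: str) -> int:
    # Hash the key so a record's location is always derivable from the key alone.
    return int.from_bytes(blake2b(key.encode(), digest_size=8).digest(), "big") % NUM_PARTITIONS

def node_for(key: str) -> str:
    return partition_to_node[partition_of(key)]

def move_partition(p: int, new_node: str) -> None:
    # Rebalancing moves a small partition, not individual rows, between nodes.
    partition_to_node[p] = new_node

print(node_for("user:1001"))
move_partition(partition_of("user:1001"), "node-9")
print(node_for("user:1001"))
```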

[Figure 1]

Going back to the evolution of database products, Facebook developed Cassandra on its own because it needed it. It later had to move part of its application from Cassandra to HBase, but had to improve HBase first. Facebook reported in a paper that the reason it had to use HBase was that it needed consistency in order to implement its messaging application. The messaging application, made available in 2011, enabled users to manage a single inbox for various messages including chats and Tweets. This totals 15 billion messages from 350 million members every month and 120 billion chats between 300 million members. Then Facebook wanted to add consistency on top of performance because of the increased number of messages delivered.

On the other hand, Google added a transactional layer on top of its BigTable KVS. It did this for the app engine that is used by many users concurrently. The transactional layer allowed users to write their application code.  Google also developed Caffeine for near-real-time index processing and HRD (High Replication Datastore) for OLTP systems such as AppEngine to use.

Those are the trends that cloud players illustrated when NEC was deciding to enter this market. At NEC we developed our own proprietary mainframe database more than 30 years ago. Incidentally, I was on that team. We didn’t extend our reach to Unix or Windows, so we didn’t have a database product for those platforms. In 2005, we decided to develop our own in-memory database and made it available in Japan. This is TAM, our transactional in-memory database. We added the ability to process more queries by adding a columnar database called DataBooster in 2007. So we then had in-memory databases for both transactions and queries. In 2010, we successfully released and deployed the in-memory database for a large Japanese customer. After our North America research team released the CloudDB paper, we merged these technologies together to become IERS.

We felt that if we could develop everything on top of a KVS, then it would be scalable. That is a core concept of IERS.

MS:  What were the design goals of IERS?  Could you describe how those goals are met by the system’s architecture?

AK: Regarding our architecture, the transaction nodes implement intelligent logs with an in-memory database to facilitate transaction processing. The difference between IERS and most databases is that IERS is a log-based system. IERS does not have the usual caches (read, dirty, write), which means we don’t have to synchronize caches in the usual manner. We simply record all changes on the transaction server in time-ordered fashion and then synchronize the changes in batches to the IERS data pods, which are database servers. The result is that the KVS only maintains committed changes.
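A rough sketch of that write path, under the assumptions above, might look like the following. The class and method names are illustrative; they are not IERS’s actual interfaces.

```python
# Sketch of the write path described above: committed changes are appended to a
# time-ordered in-memory log and later pushed to the storage layer in batches.
import time

class TransactionServer:
    def __init__(self, storage: dict):
        self.log = []           # in-memory, time-ordered record of committed changes
        self.storage = storage  # stands in for the KVS storage servers

    def commit(self, key: str, value: str) -> None:
        # No dirty-page cache to synchronize; the log is the source of truth.
        self.log.append((time.time(), key, value))

    def propagate_batch(self) -> None:
        # Push accumulated committed changes to the KVS, then clear the log.
        for _, key, value in self.log:
            self.storage[key] = value
        self.log.clear()

kvs = {}
tx = TransactionServer(kvs)
tx.commit("order:42", "shipped")
tx.propagate_batch()
print(kvs)   # the KVS holds only committed changes
```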

[Figure 2]

We do have a cache, but it is a read-only cache, not the typical database cache. The only data the cache maintains is data read by the query server, so we do not need to be concerned with cache coherency. The transaction server itself is an in-memory database. We record every change on the transaction server and we replicate across at least three nodes. The major difference between IERS and other databases is the method of data propagation. Our technology allows the query server, accessible via SQL, to see a consistent view even though reads and writes follow separate paths. If you do not care much about consistency, then you can rely on the storage server’s cache; the storage server holds the data previously transferred from the transaction server. If you care about consistency between records or tables, then you should read through the transaction server so that the full consistency of the transaction is maintained.
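The two read paths can be sketched in a few lines, again with illustrative names rather than IERS’s real API: one read served from the storage layer, and one that consults the transaction server’s time-ordered log first.

```python
# Sketch of the two read paths described above.

def read_eventually_consistent(storage: dict, key: str):
    """Served from the storage server's read-only cache; may lag recent commits."""
    return storage.get(key)

def read_consistent(tx_log: list, storage: dict, key: str):
    """Consult the transaction server's time-ordered log first for full consistency."""
    for _, k, v in reversed(tx_log):   # newest committed change wins
        if k == key:
            return v
    return storage.get(key)

log = [(1, "order:42", "packed"), (2, "order:42", "shipped")]
print(read_eventually_consistent({"order:42": "packed"}, "order:42"))  # possibly stale view
print(read_consistent(log, {"order:42": "packed"}, "order:42"))        # consistent view
```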

The important point in terms of scalability is that both the KVS (storage) server and the transaction server work as if they are KVS storage, so we can maintain scalability as if the entire database were a KVS, even though we have a transactional logging layer.

From a business point of view, there are users running a KVS such as Cassandra, which does not support consistency in a transactional manner. We want to see those users extend their databases by adding other applications. If they want a KVS that supports consistent transactions, then we can help them. On the other hand, in Japan we see that some of our customers are trying to move their existing applications from an RDBMS to a more scalable environment because of a rapid increase in their incoming traffic. In that case, they have their own SQL applications, and rewriting them for a KVS that doesn’t support SQL is very difficult. So we added a SQL layer that allows users to easily migrate existing applications from an RDBMS to the KVS.

MS: Is there a part of IERS’ functionality or architecture that makes it unique?

AK: From a customer point of view, the difference is that IERS provides complete scalability and consistency. The key is the extent to which we support consistency and SQL to make it easier for customers to run their applications. We added a productivity layer on top of a purely scalable database, and we can continue to improve that layer. Typically, people have to compromise productivity to get scalability. Simply pursuing scalability isn’t so difficult. Application database vendors focus on the productivity layer and then add scalability. Our direction is different: we looked at scalability first and built a completely scalable database, then added the productivity layer, such as security support and transactional support, without compromising scalability.

MS: What types of projects is IERS well-suited for?

AK: Messaging is one good application. If you want to store each message in a transactional fashion (tracking whether it goes out, is read, is responded to, and so on) and require scalability, then this is a good application for IERS.

Another case is M2M, because it requires scalability and there is usually a dramatic increase over time in the number of connected devices. The customer also has a requirement to track each device in a transactional fashion: each device has its own history that must be maintained in a consistent manner.

To learn more about NEC’s IERS solution, visit: http://goo.gl/TnFkbR

*Matt Sarrel is a leading tech analyst and writer providing guest content for NEC.

NEC teams with Microsoft for flexible, open, standards-based SDN for the cloud

In case you missed it, NEC announced on October 7 the general availability of its integrated compute and network orchestration solution for Microsoft Hyper-V and System Center Virtual Machine Manager customers. The streamlined infrastructure solution, with a REST-based northbound interface from the ProgrammableFlow Software-Defined Network to applications and integration with Virtual Machine Manager, enables application-aware networking and unprecedented levels of control and flexibility.

Mike Schutz, General Manager of Microsoft Product Marketing for the Server and Tools Group, talks about the collaboration between these strategic partners and the benefits customers can expect in a new video entitled NEC and Microsoft: Delivering Open, Standards-based SDN for Cloud. NEC is looking forward to demonstrating the technology at Microsoft TechEd May 12-15 in Houston, TX.

As a recap, here are the key benefits customers can expect from this award-winning SDN solution:

  • Radically speeds network provisioning and configuration
  • Controls data center traffic flow at a granular level, including dynamic, application-centric security policy for reduced OpEx and CapEx
  • Applies business policy consistently and automatically across the compute and network infrastructure for greater responsiveness to the business
  • Increases network visibility for greater manageability and accelerated troubleshooting
  • Enables isolated, secure multi-tenant networks for private clouds to meet compliance and service-level goals
  • Improves throughput within the server and across the network fabric
  • Reduces power and footprint compared with traditional networks, with savings of up to 80%
  • Scales to thousands of virtual networks and hundreds of thousands of VMs
  • Provides an open, standards-based architecture for investment protection
  • Adds, moves or changes virtual and physical networks in seconds from Microsoft System Center Virtual Machine Manager, shaving days or weeks from traditional deployment models

For more information, go to necam.com/sdn, or contact NEC or your Microsoft reseller.

NEC to demonstrate integrated server and network virtualization orchestration at TechEd

After delivering the first extensible vSwitch for Windows Server 2012 Hyper-V, the ProgrammableFlow PF1000, NEC is continuing to collaborate with Microsoft for the benefit of Windows Server 2012 customers. For the first time at TechEd, beginning Sunday, June 2, in New Orleans, attendees will see a demonstration of a completely integrated NEC Software-Defined Network with Microsoft’s System Center Virtual Machine Manager, providing a single point of orchestration for both virtualized server and network environments.

Microsoft’s System Center Virtual Machine Manager with NEC’s OpenFlow-based ProgrammableFlow solution provides streamlined network deployment and configuration, dramatically decreases operational costs and extends flexibility, control and automation across the network.

NEC will be leveraging its award-winning ProgrammableFlow® networking suite, including the PF6800 OpenFlow-based SDN controller and the new PF1000 vSwitch, to enable complete network virtualization and automated, dynamic network management.

NEC’s ProgrammableFlow networking suite adapts to changing workload needs by abstracting from the physical network, controlling data center traffic flows, and enabling integrated policies that span both the physical and virtual networks. The Virtual Tenant Network (VTN) function enables ProgrammableFlow customers to easily configure and manage isolated and secure virtual networks as required by many use cases.
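As a purely hypothetical sketch of driving such a controller through a REST-based northbound interface, the snippet below creates an isolated tenant network. The host, port, endpoint path and JSON fields are placeholders for illustration, not the documented ProgrammableFlow PF6800 API.

```python
# Hypothetical sketch only: URL, port and payload fields are placeholders,
# not the actual ProgrammableFlow northbound REST API.
import json
import urllib.request

controller = "http://pf-controller.example.com:8080"
payload = {"name": "tenant-finance", "isolated": True}  # one isolated virtual tenant network

req = urllib.request.Request(
    url=f"{controller}/vtn",                       # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:          # would return the created VTN's details
    print(resp.status, resp.read().decode())
```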

Integrated SDN solution promises flexibility, control, automation and reduced OpEx

The integrated NEC ProgrammableFlow and Microsoft System Center 2012 Virtual Machine Manager solution promises benefits in four key areas: increased flexibility, control, automation and operational expense savings. While the integration with System Center Virtual Machine Manager is new, ProgrammableFlow is currently in production in multiple global and North American installations with documented results.

From a flexibility perspective, the virtual network abstraction provides a new level of flexibility similar to what server virtualization delivered. In the past, physical networks could not be easily redeployed or changed. Complete network virtualization allows for virtual networks that can be easily modified and moved. This may be the use case that many seek when looking for Software-Defined Networking (SDN) solutions. Flexibility also results from ProgrammableFlow’s multi-tenant networking. The ability to isolate secure virtual networks to match business requirements is a new and exciting product of ProgrammableFlow’s Virtual Tenant Networking.

New levels of control are enabled with the ability to set policies across the network, all from a centralized point. The ProgrammableFlow networking suite further provides bandwidth control and dynamic traffic control with network-wide Quality of Service (QoS), allowing customers to prioritize traffic across the network and mitigate or eliminate bottlenecks on key workloads or designated network flows.

The ProgrammableFlow networking suite integrated with System Center Virtual Machine Manager delivers unmatched network automation. It enables customers to move VMs between physical hosts with business policy moving with the VM and packet routing updated automatically, transparent to both the administrator and the user. As a further aid to managers moving virtual machines, end-to-end flows can now be viewed on the management console, providing new insight into network status. Dynamic, policy-driven network configuration and reconfiguration becomes increasingly important given the imperative of business agility, and System Center Virtual Machine Manager ties these benefits together across hundreds or even thousands of VMs and their supporting networks.

Reducing complexity and eliminating many manual tasks drive the operational savings from the integration of System Center Virtual Machine Manager and ProgrammableFlow. Kanazawa University Hospital reported that its deployment of software-defined networking will produce operational cost savings of as much as 80%. Genesis Hosting Services is on record with a significant reduction in network programming. Nippon Express cited energy savings exceeding 50% over conventional networks, and last week SDN Central reported that NTT expected more than 50% OpEx savings from ProgrammableFlow SDN by the end of 2015.

For more information on Software-defined Networking and NEC’s ProgrammableFlow, please visit our web site at necam.com/sdn. Also, register for TechEd or, to follow the proceedings, go to northamerica.msteched.com.