Leawood mom gets her own DIY Network show, co-starring her dad – Kansas City Star

This time next year, Kansas City may have its own celebrity fixer-upper.

DIY Network has signed Tamara Day, a Leawood mother of four and a home rehabber, to star in “Bargain Mansions.” Twelve 30-minute episodes will begin airing in October.

Her father, Ward Schraeder, who lives in Salina and is CEO and a principal partner at Medical Development Management in Wichita, will co-star.

Day and Schraeder were in the middle of pre-production for the show when I reached them by phone. Both sounded happy and enthusiastic about what lies ahead, but they were realistic about the grueling schedule it will bring. Day expects to be onsite, renovating homes seven days a week for the next nine months. Video crews will be taping two days a week.

“We’re just trying to get our life together right now,” Day said, chuckling. “I’ve taken the last two weeks off to spend time with my kids and husband, and I went to see family and enjoy peace and quiet before the storm hits.”

Reality Road Entertainment, a video production and casting company in the Crossroads Arts District, will produce the show with its Los Angeles partner, Conveyor Media. Shooting is scheduled to start at the end of January. Matt Antrim, co-owner of Reality Road, will be executive producer and creator of the show.

Day and her father will renovate six homes over the first season, with each episode featuring the renovation of two rooms.

“Tamara and her husband are actually buying all the houses,” Antrim said. “They will redo them from top to bottom, but the show will only feature four rooms (per house). You’ll always see the kitchen, the master bedroom and bath and two other rooms being redone. But all will be for sale.”

The first two episodes will focus on a 4,000-square-foot Hyde Park bungalow built in 1906 that was gutted by its previous owner.

“It’s a great big old house that an investor got in over his head on and we lucked out,” Day said. “I try really hard to save as many historic aspects as I can. Unfortunately there’s very little salvageable in this house. It’s so sad. It kills me. They did save doors and banisters. And I think for the most part we’ll be able to salvage the floors.”

She and Schraeder had just finished consulting with Davis Paint about removing paint from the home’s limestone exterior.

DIY Network has also invited Day to renovate the kitchen and living room of a Vermont house later this month for its show “Ultimate Retreat” (also known as “Blog Cabin”).

Day came to the attention of Reality Road producers when they were talking to her brother, Caleb Schraeder, a woodworker, about a possible show. He wasn’t a good fit, but he suggested his sister, who had remodeled a dozen dilapidated homes with her husband, Bill Day, a financial planner.

Day does a lot of the work herself, including designing floor plans, knocking down walls, painting walls, and stripping, rebuilding and refinishing floors and woodwork.

Antrim shot a three-minute sizzle reel of Day and took it to DIY Network. The network gave him money to shoot an eight-minute “super sizzle” so it could decide whether to order two pilot episodes.

That’s when her dad inadvertently worked his way onto the show. Schraeder kept showing up to see what she was doing, and DIY Network loved him. Antrim said he’s like John Wayne.

His catchphrase, according to Day, is, “I’m glad I thought of that,” which he usually says when he initially disagrees with one of her ideas that turns out to be a good one.

Day put Schraeder on the phone while we were talking. He’s not sure what to make of starring in a TV show.

“Maybe when it becomes real, and it’s actually on TV on a regular basis, and I see how I like it I’ll be able to tell you,” he said. “Right now it’s fun. I get to spend time with Tamara, and I get to see how a TV show is made.”

In the course of his own career, Schraeder said, he has employed several thousand people and built hospitals and health care centers, but he has still learned a thing or two from his daughter.

“She has things I don’t have,” he said. “She has personality and talent and decorating skills … I’m not surprised at anything Tamara does. She’s exceptionally talented and has never shied away from a challenge or opportunity.”

The pilots, called “Little Money Mansions,” aired several times over the summer and were well received by viewers, including a focus group of 2,000 people. The only thing they didn’t like, as far as Day can tell, is the name of the show.

“And they decided on 12 episodes rather than six episodes or four episodes, and the fact that they asked me to do the Vermont project shows me they’re really behind me and see something. They get pitched a lot of shows every day. The odds of us getting to this point are amazing.”

Reality Road also recently secured financing from HGTV to shoot a super sizzle reel of Cody Brown, an artist/rehabber working out of a studio in the West Bottoms.

“He does everything,” Antrim said. “Plumbing, artwork, electrical, furniture, he can make your cabinets, he can literally do everything. He does homes and commercial space as well.”

Look for a profile of Brown in the Jan. 15 Spirit section.

Having a local company producing these shows is a huge boon for Kansas City, said Stephane Scupham, film and new media manager for Visit KC.

“Seeing Kansas City on a national level so that more of the general public gets to know us and proves we’re a great destination to shoot in,” she said. “I couldn’t be happier with Matt and Reality Road. They are doing really well right now. It’s exciting.”

DIY Venue Queen Ave Looks Forward to Reopening – Nashville Scene

New team makes progress on codes and permits compliance

Meth Dad at Queen Ave, 4/28/2015. Photo: Stephen Trageser

Here’s some hopeful news in the ongoing story of DIY spaces in Music City: The proprietors of East Side space Queen Ave (a venue which The Black Keys’ Patrick Carney took time to crush on in our 2016 Year in Music issue) have released a statement on their progress in getting up to code, and things look promising.

In the month since Nashville’s DIY spots went into a holding pattern in the wake of the tragic fire at Oakland, Calif.’s Ghost Ship, spaces like Queen Ave, Drkmttr and The Glass Menage, critical to the health of Music City’s music scene, have looked for ways to work with the city so they can continue hosting shows.

Two weeks before the fire, Queen Ave had suspended their events calendar while a new team — Rachel Warrick, Molly Hornbuckle and Arthur Leago — prepared to take the reins from founders Tyler Walker, Taylor Jensen and Mike Kluge. Today, Warrick, Hornbuckle and Leago tell the Scene that their efforts to obtain a Use and Occupancy permit (which Metro Codes requires all venues to have) have produced some positive results.

“In the wake of the Oakland tragedy, we’ve been working closely with the Music City Music Council to get Queen Ave up to standards with codes and permitting,” the Queen Ave team tells the Scene in an email. “Our team has been working very hard to reopen the space and it has been so encouraging to have such a supportive community around us.

“Our next steps [toward getting the permit] involve getting a Life & Safety Analysis by a certified Tennessee architect, a Fire Inspection with the Nashville Fire Department, and more likely than not, bringing the space up to ADA standards,” the statement continues. “While working on these steps, we will be polishing up Queen Ave with newly painted floors and walls, and building a stage for performers, among other improvements to create a new and improved experience. We’re hesitant to announce a grand opening date at this point in time — becoming ADA compliant might throw another hitch our way — but we’re aiming for some time in March.”

Anyone who’d like to offer advice or volunteer time to help with construction and painting projects can reach out to the organizers. A new website is set to launch this weekend, where they’ll post updates about the ongoing process.

One bummer note: Queen Ave won’t be ready in time to host a benefit for the Oasis Center featuring Idle Bloom, Western Medication, Music Band and other top local rock bands, scheduled for Friday, Jan. 13. However, the show will go on at The East Room — check the Facebook event page for details.

Build a DIY garden you can bring on the road – Popular Science

There’s nothing quite like putting down roots and tending a garden. But what happens if you don’t have a backyard? Or you’re suddenly uprooted? Or you decide to go on a road trip and can’t get anyone to watch your plants?

Just whip up a portable container garden. Sure, it’s not the same thing as a plot of land, but it’s easy to build and you can move it on demand. In this design, created by researchers at the University of Maryland Extension, a water reservoir helps keep the plants healthy and hydrated—even if you forget to water them while you’re traveling.


  • Time: 2 hours

  • Cost: $50

  • Difficulty: Easy

Tools & Materials

  • Two five-gallon food-grade buckets (A)
  • Drill with ½-inch hole saw and 1/16-inch bit
  • Sandpaper
  • Cloth or nylon strips (B)
  • PVC pipe, ¾-to 1-inch wide, 18 inches long (C)
  • Saw
  • 16 quarts of soil (D)
  • Seedlings or plants (E)


  1. Trace the mouth of the pipe on the bottom of one bucket, less than an inch from the edge. Drill and sand out that circle.
  2. Drill five additional 1/2-inch holes: one in the center and four evenly spaced around the edge. For drainage, add eight smaller holes spread around the bottom.
  3. Tie a knot at one end of each cloth strip. Feed the strips through the 1/2-inch holes so the knots sit on the inside and the cloth hangs down.
  4. Put the holey bucket into the second one, mark a spot on the outer bucket where the holey base hits, and separate them. Drill a small overflow hole at the mark and restack the buckets.
  5. Cut one end of the pipe at a 45-degree angle. Push that end through the pipe-size hole in the inner bucket.
  6. Fill the inner bucket with damp soil and plants, and keep them hydrated by pouring water down the pipe. Stop adding water when the overflow hole starts to leak.

This article was originally published in the January/February 2017 issue of Popular Science, under the title “Road-ready garden.”

Global DIY Tools Market is Forecast to Grow to USD 13.9 Billion by 2021: Technavio – Business Wire (press release)

LONDON--(BUSINESS WIRE)--According to the latest market study released by Technavio, the global DIY tools market is expected to grow at a CAGR of more than 3% during the forecast period.

This research report, titled ‘Global DIY Tools Market 2017-2021,’ provides an in-depth analysis of the market in terms of revenue and emerging market trends. The report also includes up-to-date analysis and forecasts for various market segments and all geographical regions.

Growing home improvement activity, combined with a high number of renovation projects and confidence in the home renovation industry, is one of the major driving factors of the global DIY tools market. Since this market is highly fragmented, vendors have plenty of prospects to explore opportunities. Increasing job and wage growth, rising home sales, and price appreciation are driving the demand for major DIY retailers.

Other key driving factors for the global DIY tools market are the increasing number of DIY stores with unique offerings, growth in new building construction and personal consumption expenditure aiding market growth, and popularity of DIY activities that is accelerating sales. These factors are together expected to drive the DIY tools market to be worth USD 13.9 billion by 2021.
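As a rough sanity check on these figures (a back-of-the-envelope sketch, not from the report): a market reaching USD 13.9 billion in 2021 after five years of roughly 3% compound annual growth implies a starting size of about USD 12 billion.

```python
# Back-of-the-envelope check of the forecast figures. Assumptions (ours,
# not the report's): a 2016 base year, a 5-year horizon, and a flat 3% CAGR.
def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Discount a future market size back to its implied starting size."""
    return future_value / (1 + cagr) ** years

base_2016 = implied_base(13.9, 0.03, 5)  # USD billions
print(round(base_2016, 1))  # → 12.0
```

The actual growth rate is only stated as “more than 3%,” so the true implied base would be somewhat lower.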

Request a sample report:

Technavio’s sample reports are free of charge and contain multiple sections of the report including the market size and forecast, drivers, challenges, trends, and more.

Based on product, the report categorizes the global DIY tools market into the following segments:

  • Power tools
  • Hand tools
  • Decorating tools

Power tools

“Power tools offer more ease and convenience to DIYers for activities like screw driving, drilling, chiseling, routing, sanding, buffing, polishing, and leveling. Some of the commonly used power tools among DIYers are drilling machines, circular saws, jack hammers, nail guns, and wall chasers. Key vendors in this market segment are Stanley Black and Decker, Bosch, and Techtronic. This market segment is mainly driven by professionals seeking to streamline their DIY projects,” says Poonam Saini, one of the lead analysts at Technavio for retail goods and services research.

During the forecast period, an increase in demand for cordless and more powerful tools, along with technologically advanced tools made specifically for DIYers, will contribute to the revenue of the power tools market. There is also a growing demand for smaller, more ergonomic tools that are more powerful than traditional power tools. Electric power tools dominate the market, followed by pneumatic and engine-driven power tools. This segment is forecast to continue its market dominance through the forecast period.

Hand tools

Hand tools do not require any power source and involve manual labor to function. Some of the most popularly used hand tools are hammers, garden forks, rakes, spanners, screwdrivers, pliers, and wrenches. The major driver for this segment is expected to be the demand generated from the growing home improvement market. Some of the key vendors catering to the customers in this segment are Actuant, Stanley Black and Decker, Bosch, and Techtronic.

Region-wise, North America and Western Europe are the market leaders, contributing most of the market revenue. In APAC, India and China bring in the highest demand for DIY tools like rulers, squares, and dividers. Globally, general-purpose hand tools like hammers and pliers are the most in-demand products, followed by metal cutting tools like hacksaws and bolt cutters. Because these products are durable, any new demand in this segment will come from innovative products with ergonomic designs and multiple uses.

Decorating tools

“Decorating tools form a niche but important segment of the global DIY tools market. It includes products like wallpaper scrapers and paintbrushes. In Europe, people view painting and decorating their homes as a bonding activity, which creates a high demand for products from this segment. Since these tools are low-cost and require frequent replacement, there is a continuous demand for them,” says Poonam.

The increase in home ownership worldwide will have a positive impact on the market, as homeowners are believed to be more likely to spend on home improvement and remodeling than people living in rented houses. This creates a direct link between rising home ownership and growth in the decorating tools segment. Additionally, increasing disposable income across various demographics will also play a major role in driving this market segment.

The top vendors highlighted by Technavio’s research analysts in this report are:

  • Makita
  • Robert Bosch
  • Stanley Black & Decker
  • Techtronic

Browse Related Reports:

Become a Technavio Insights member and access all three of these reports for a fraction of their original cost. As a Technavio Insights member, you will have immediate access to new reports as they’re published, in addition to all 6,000+ existing reports covering segments like apparel and textile, cosmetics and toiletry, and pet supplies. This subscription nets you thousands in savings, while keeping you connected to Technavio’s constantly expanding research library and helping you make informed business decisions more efficiently.

About Technavio

Technavio is a leading global technology research and advisory company. The company develops over 2000 pieces of research every year, covering more than 500 technologies across 80 countries. Technavio has about 300 analysts globally who specialize in customized consulting and business research assignments across the latest leading edge technologies.

Technavio analysts employ primary as well as secondary research techniques to ascertain the size and vendor landscape in a range of markets. Analysts obtain information using a combination of bottom-up and top-down approaches, besides using in-house market modeling tools and proprietary databases. They corroborate this data with the data obtained from various market participants and stakeholders across the value chain, including vendors, service providers, distributors, re-sellers, and end-users.

If you are interested in more information, please contact our media team at

RiNo Art District Pushes City to Stop Surprise DIY Venue Visits, Work With Artists – Westword

Denver police oversee the artists ousted from Rhinoceropolis. Photo: Lindsey Bartlett

Twelve days after the city shut down legendary DIY venues Rhinoceropolis and Glob, citing unsafe conditions and ousting eleven artists who’d been living there, Denver police and fire officials returned to RiNo for another sudden inspection on December 20. Their unexpected appearance prompted an angry e-mail to the city from Jamie Licko, president of the River North Art District, who’d met with the displaced artists and others in the worried DIY community just a few days earlier to come up with a game plan.

“Another surprise check is currently occurring at Juice Church, another RiNo DIY space. Police and fire are currently there. We are waiting to see what happens next,” Licko wrote. “Last week, RiNo sent on our list of requests, the most important of which was an amnesty on checks until we all could have a conversation about how to handle these spaces, and how to ensure we aren’t displacing more people right now. We were told we would be getting correspondence back from the city on this matter last Thursday. We never received anything….”

But now, with this surprise check, the city’s message seemed all too clear.

“I’d like to request a meeting with RiNo, mayor’s office, fire, police and Arts and Venues ASAP,” Licko’s emergency e-mail continued. “We want as much as the City does to get these places up to code, but this is unacceptable.”

Licko got that meeting. Within minutes of her irate e-mail hitting in-boxes around the city, she’d received calls from Denver Fire Chief Eric Tade as well as Brad Buchanan, head of the Department of Community Planning and Development. And at a quickly convened gathering on December 22, “everyone was at the table,” she says, including representatives of the mayor’s office and officials from Arts & Venues and the city’s planning, police and fire departments, along with RiNo Art District boardmembers, property owners and actual artists — some of whom had been evicted from Rhinoceropolis and Glob after the first round of surprise visits on December 8. Since those visits — prompted, according to fire-department officials, by tips that they’re required to investigate rather than any publicity stemming from the Oakland Ghost Ship tragedy (though that publicity could have prompted those tips, whether from opportunistic trouble-makers or concerned neighbors) — members of Denver’s DIY community have been reluctant to talk about their spaces, for fear of attracting more attention and yet more snitching to the city.

Even so, some tipster inspired that surprise check on Juice Church, a DIY music venue at 3400 Lawrence Street. But while Juice Church’s Facebook page has been taken down, the space is still open: This time, RiNo representatives were able to secure a promise from the city that Juice Church could stay open for thirty days while any necessary repairs were made.

And the RiNo Art District isn’t stopping there. It’s setting up a fund that other DIY venues will be able to tap into for repair work that will keep them operational; members of the community, including developers who’ve made RiNo one of the hottest spots in town, have stepped up to offer financial support, Licko says; many responded to an earlier missive she’d sent on behalf of RiNo after the abrupt closures of Rhinoceropolis and Glob. (Westword gave a last-minute 2016 MasterMind award to the ousted artists, and will deliver the $2,000 honorarium this week.)

Fantasia 2016 at Rhinoceropolis last fall. Photo: Kenneth Hamblin III

And there are plenty of other businesses that can be tapped for donations, including marijuana companies that have priced old warehouses out of reach for the artists who used to take on the spaces when no one else wanted them. “This would be money that would essentially bridge the gap between where places are now and what they need to be up to code,” explains Licko. But the plan isn’t just to make money available for emergency fixes; she wants to have RiNo representatives and maybe even lawyers go along on city inspections of DIY locations, to make sure that communications are clear and that the venue owners and operators have a chance to make needed repairs without shutting down.

Longer-term, the RiNo Art District might push for zoning changes — not just in RiNo, but across the city — that would allow more than four unrelated people to live together and get around the city’s sky-high rents (think old-school boardinghouses), and also ask for more simplified explanations of Denver’s zoning codes.

Andrea Burns, communications director for the planning department (which sent not just Buchanan, but all of its chief inspectors to the December 22 meeting), says the department is already looking into that since “our code books are hundreds of pages long.” The city’s response to the situation is “very much a work in progress, and there are certain aspects of safety we’re not willing to compromise on, for obvious reasons,” she adds. “Safety is a priority for everybody, but people being displaced should be a last resort.”

At this point, RiNo, which got its start as an art district over a decade ago but recently voted in a business improvement district, is ready to try anything to keep artists in the area. After all, the River North Art District is “where art is made,” according to its slogan. “I told the DIY community, ‘Look, if there was a solution to this, someone would have already done it,’” Licko says. “We need to ensure for the long haul that there is a space and a place for everybody to keep creating.”

The renovated McNichols Building is no DIY arts space. Photo: Arts & Venues

But first, artists need to be assured that Denver, which has a reputation as one of the most creative places in the country, appreciates the people who earned it that designation. Here’s a start: At 5:30 p.m. on Wednesday, January 18, Denver Arts & Venues, the Denver Commission on Cultural Affairs and the planning and fire departments will host Safe Creative Spaces & Artspace Collaboration at the McNichols Building (itself renovated as an arts/events space, but hardly DIY), “a forum for proactively getting information to potential tenants and/or building owners regarding building and fire safety, and how to make creative space safe for occupants.”

Making sure no more surprise inspections occur would be a good start: The city’s announcement of the forum was sent out the day before the visit to Juice Church and Licko’s angry e-mail.

That was just an unhappy coincidence, says Ginger White-Brunetti, deputy director of Arts & Venues, who explains that in the wake of the Rhinoceropolis/Glob closures, the city just wanted to “pick a date and pull something together as soon as we could.” The subsequent action at Juice Church and the December 22 meeting just emphasized how important such a forum could be; she says that organizers are still taking suggestions for what the event should cover. (You can put yours in the comments section.)

“From what I’ve heard from the DIY community, everyone understands there are legitimate concerns about safety,” White-Brunetti adds. “The question of the place of artists in a city is a holistic one that Arts & Venues has been looking at for some time.”

And if Denver is truly a city that appreciates its artists, now is the time for some real answers.

Retro-Style DIY Polygraph: Believe It Or Not – Hackaday

A polygraph is commonly known as a lie detector but it’s really just a machine with a number of sensors that measure things like heart rate, breathing rate, galvanic skin response and blood pressure while you’re being asked questions. Sessions can be three hours long and the results are examined by a trained polygraph examiner who decides if a measured reaction is due to deception or something else entirely. Modern polygraphs feed data into a computer which analyses the data in real-time.

Cornell University students [Joyce Cao] and [Daria Efimov] decided to try their hand at a more old fashioned polygraph that measures heart and breathing rates and charts the resulting traces on a moving strip of paper as well as a color TFT display. They had planned on measuring perspiration too but didn’t have time. To measure heart rate, electrodes were attached to the test subject’s wrists. To measure breathing they connected a stretch sensor in the form of a conductive rubber cord around three inches long to a shoelace and wrapped this around the test subject’s abdomen.

While the output doesn’t go into a computer for mathematical analysis, it does go to a PIC32 for processing, which controls the servos that draw the traces on the paper and drives the TFT display. The circuit between the breathing sensor and the PIC32 is fairly simple, but the output of the heart rate electrodes needed amplification. For that they came up with a circuit, based on another project, with a differential amplifier and two op-amps for filtering.
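The article doesn’t include the students’ firmware, but the kind of digital smoothing a microcontroller might apply to a noisy electrode trace after analog filtering can be sketched with a simple moving average (a hypothetical illustration, not their actual code):

```python
# Hypothetical sketch of smoothing a noisy biosignal trace with a moving
# average; this is NOT the students' actual PIC32 firmware.
from collections import deque

def moving_average(samples, window=5):
    """Yield a running mean over the last `window` samples."""
    buf = deque(maxlen=window)  # old samples fall off automatically
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

noisy = [0, 10, 0, 10, 0, 10]  # exaggerated alternating noise
print(list(moving_average(noisy, window=2)))
```

A wider window suppresses more noise at the cost of blurring fast features like the sharp spikes of a heartbeat, which may be why their heart rate output still looks erratic.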

Since parts of the circuit are attached to the body they made some effort to prevent any chance of electrocution. They used 12 volts, did not connect the test subject to power supply chassis ground, and tested the heart rate electrodes with a function generator first. They also included DC isolation circuitry in the form of some resistors and capacitors between the heart rate electrodes and the amplifier circuit. You can see these circuits, as well as a demonstration in the video below. The heart rate output looks a little erratic, no surprise given that the body produces a lot of noise, but the breathing trace looks very clear.

[Joyce] and [Daria] were mentored in this project by Cornell’s [Bruce Land], whose students have been so prolific over the years that he has his own tag here on Hackaday. Pulling out a random sample of their projects, check out this real-time video anonymizer, these FPGAs that keep track of a ping-pong game, or you can become a student yourself and take one of his courses.

Does this DIY pet paw balm for winter actually work?

With subzero temperatures, salt-laden sidewalks, and generally wintry conditions, your pet’s paws can really take a beating! Can you imagine going outside without some sort of shoe on? 

Salt used to melt snow on sidewalks can be harmful to your pet’s paw pads, and lots of snow and ice can make their paws cold and uncomfortable. 

We decided to test out a popular online recipe for a paw balm said to help protect your furry friends from the cold and snow while outside. It can also be used to hydrate dry noses.

We spoke with Dr. Kevin Fitzgerald at VCA Alameda East Veterinary Hospital about the safety of the salve. He says it’s pet-safe; however, your dog or cat may try licking it off their paw pads, so it’s best to keep an eye on them right after applying.

Reporter Noel Brennan has one of the cutest dogs we know, so we enlisted the help of Cooper to test out the recipe. 

Stay tuned to find out how it works! Noel and Cooper’s story will air Thursday at 4PM on 9NEWS. 


  • 2 oz. (approx. 4 tbsp.) olive, sunflower, or sweet almond oil
  • 2 oz. (approx. 4 tbsp.) coconut oil 
  • 1 oz. (approx. 2 tbsp.) shea butter 
  • 4 tsp. beeswax 


In a small pot or double boiler over low heat melt the oils, shea butter, and beeswax. Stir continuously until all is melted and well blended.

Carefully pour the mixture into lip balm tubes and/or tins.

Let them cool on the counter until hard. Cap and label. Keep away from extreme heat.

Apply the balm as a preventive treatment or to help soften dry paw pads or noses. Use within one to two years.

Recipe adapted from Frugally Sustainable.

Other ways to protect your pet’s paws:

Booties are the best option – although we know some dogs just can’t quite get the hang of them! 

If you choose not to use booties on your dog, be sure to wipe his or her feet before your pup comes inside to ensure that de-icing products (like salt) have been removed along with any ice balls that might have formed.

The American Kennel Club also recommends keeping the paw hair short with frequent trimming. It will help prevent snow and ice from forming balls that can lead to chafing, chapping, and even cuts. Trim the hairs around the outside of your dog’s paw so that they don’t extend past the boundaries of the paw. 

(© 2017 KUSA)

Doctors warn DIY “slime” can be toxic – January 4, 2017 – WBAY

NEW HAVEN, Conn. (WTNH) — Hundreds of videos are popping up on YouTube with the newest craze: kids showing how to make slime, or Gak, with Borax, which is used as a cleaning product or pesticide.

Dr. Richard Uluski said the do-it-yourself toy can be unsafe for children.

“Something that’s a chemical should not be used as a toy,” said Uluski.

There are dozens of links with DIY slime using Borax.

“It’s very popular because it makes the stuff that you see in the stores and you can dye it all different colors,” said Uluski.

Dr. Uluski is warning parents about the health concerns.

“It is just like putting lead in paint and putting that on a toy and kids don’t want to put that in their mouth so it’s the same aspect here,” said Uluski.

Dr. Uluski said if a child ingests slime made with Borax it could be toxic and could even cause seizures.

“From a medical standpoint too much of Borax can lead to medical problems including things like seizures,” said Uluski.

He said a child won’t show an immediate effect.

“There are not going to be burns in the mouth or blisters in the mouth but a lot of kids who ingest a lot of it can start to throw up and have stomach discomfort and pain,” said Uluski.

If your child does ingest Borax, take them to the hospital and call poison control: (800) 222-1222

Lego Robots At CES 2017: DIY Robotics With Lego Bricks Now Possible With Lego Boost That Turns Lego Toys Into … – The Inquisitr

Lego unveiled its new building set at the ongoing CES 2017 in Las Vegas, Nevada. This year, however, the bricks can be programmed using “Lego Boost.” The programmable robotic kit, quite different from Lego Mindstorms, neatly combines building blocks with sensors, motors, and app control to introduce young minds to the basics of programming and help them unleash their creativity in robotics.

Lego has significantly upped its appeal with Lego Boost, a robotic kit quite unlike the Lego Mindstorms line the company has been selling for quite some time. Riding the wave of educational robots that can be coded with basic programming, Lego unveiled Lego Boost at CES 2017. Lego Boost can reportedly utilize the huge piles of multi-colored and multi-sized bricks that are commonly found in every home.

Lego Boost allows kids to build a variety of robots that can respond to stimuli. The Boost Kit is centrally controlled by the Move Hub. The hub is essentially a special Lego brick with an inbuilt tilt sensor as well as a selection of connections for the included motors and visual/color sensor. As expected, the Move Hub can be controlled with a smartphone or tablet running the Boost app. Kids can communicate with the hub and determine how the main brick and the connected devices behave. The Lego Boost comes with three Boost bricks that do most of the robotic heavy lifting, including a tilt sensor, a color and distance sensor and a motor, reported CNet.
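Lego hasn’t published Boost’s internal programming model, but the sensor-to-behavior mapping the app exposes can be imagined as a simple rule table (the function and behavior names below are purely hypothetical, not a real Lego API):

```python
# Purely hypothetical sketch of mapping Move Hub sensor readings to robot
# behaviors, in the spirit of the Boost app's block programming.
# None of these names come from a real Lego API.
def choose_action(color: str, distance_cm: float) -> str:
    """Pick a behavior from the color/distance sensor readings."""
    if distance_cm < 10:
        return "stop"            # something is right in front of the robot
    if color == "red":
        return "turn_around"     # react to a colored marker
    return "drive_forward"       # default behavior

print(choose_action("red", 50))   # far away but sees red → "turn_around"
print(choose_action("blue", 5))   # obstacle close by → "stop"
```

In the Boost app, kids would assemble rules like these from drag-and-drop blocks rather than writing code.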

Lego Robots: DIY Robotics With Lego Bricks Now Possible With Lego Boost That Turns Lego Toys Into Programmable Robots [Image by Lego]

Lego Boost comes with instructions to build five different robotics projects, but given the versatility of the main brick as well as the connectors and ancillary bricks that Lego offers, kids can easily build, communicate with, and control quite a few robots. Expected to launch in the second half of 2017, the $160 Lego Boost set, replete with 843 pieces and a special playmat that the robots can move on, will allow kids aged 7 and older to build and control “Vernie the Robot”, “Frankie the Cat”, the “Guitar 4000”, the “Multi-Tool Rover 4 (M.T.R.4)”, and the “Autobuilder”. Beyond these suggested creations, Lego claims existing Legos can also be used:

“A walking base for making animals like a dragon or a pony, a driving base for building vehicles like a dune buggy or rover, and an entrance base so that children can make their own castle, fort, or even a futuristic space station.”

Lego Boost was undeniably a long-overdue concept. Lego enthusiasts have long wanted an inexpensive set of programmable bricks that works with existing Lego kits, rather than having to buy special assembly kits and a robotics kit separately. Boost can reportedly turn a lot of previously purchased Lego kits into motorized or motion-sensitive toys. The accompanying app can record voice effects, which means kids can bestow the power of speech (pre-recorded messages only) on their creations.


Boost creations might appear quite similar to Lego Mindstorms. However, the company has created the system with a younger audience as its focus. In other words, unlike the Mindstorms interface, the app offers a highly simplified interface for kids. Despite the simplicity, Lego Boost also offers logic functions for use with input from the kit’s sensors and output to the motors and the connected mobile device’s speaker, reported PC Magazine. The Creative Canvas mode will appeal to users beyond the target audience, as it offers access to slightly more advanced programming tools outside of the five pre-made projects for building more complex robots.

Incidentally, Lego isn’t outmoding its Mindstorms offering. The Move Hub in Lego Boost is essentially a transmitter and receiver device, offloading all programming and processing to the connected tablet and app. Instead of storing and running code, the Move Hub relies on the smartphone app to trigger the motors. The Mindstorms, on the other hand, is a programmable microcomputer and is a more advanced version that can retain and execute code directly, without needing a connected device.

[Featured Image by Lego]

Pretty Fly for a DIY Guy | Hackaday – Hackaday

Milling machines can be pretty intimidating beasts to work with, what with the power to cut metal and all. Mount a fly cutter in the mill and it seems like the risk factor goes up exponentially. The off-balance cutting edge whirling around seemingly out of control, the long cutting strokes, the huge chips and the smoke – it can be scary stuff. Don’t worry, though – you’ll feel more in control with a shop-built fly cutter rather than a commercial tool.

Proving once again that the main reason to have a home machine shop is to make tools for the home machine shop, [This Old Tony] takes us through all the details of the build in the three-part video journey after the break. It’s only three parts because his mill released the Magic Smoke during filming – turned out to be a bad contactor coil – and because his legion of adoring fans begged for more information after the build was finished. But they’re short videos, and well worth watching if you want to pick up some neat tips, like how to face large stock at an angle, and how to deal with recovering that angle after the spindle dies mid-cut. The addendum has a lot of great tips on calculating the proper speed for a fly cutter, too, and alternatives to the fly cutter for facing large surfaces, like using a boring head.

[ThisOldTony] does make things other than tooling in his shop, but you’ll have to go to his channel to find them, because we haven’t covered too many of those projects here. We did cover his impressive CNC machine build, though. All [Tony]’s stuff is worth watching – plenty to learn.

Modernizing HIPAA: Cloud Computing and Mobile Devices | The … – The National Law Review

Saghi “Sage” Fattahian is an associate in Morgan Lewis’s Employee Benefits and Executive Compensation Practice. Ms. Fattahian focuses her practice on a variety of employee benefits matters, including the design and implementation of qualified plans, welfare plans, fringe benefits, and other compensation arrangements. She assists clients in developing compliance protocols on regulatory issues dealing with the Internal Revenue Code, ERISA, COBRA, and HIPAA.

10 trends that will influence cloud computing in 2017 – Information Age

For most organisations the question is no longer whether it is appropriate to adopt cloud, but when is the right time and what services to move.

Meanwhile, early adopters should be reviewing their portfolio to ensure they are getting best value and optimum service, as cloud providers are constantly developing and updating their offerings. These are the key cloud trends to look out for in 2017.

1. Enterprise cloud

At the moment, the term ‘enterprise cloud’ is generally taken to mean virtualised in-house environments with an element of user self-service and reporting. Hyperconvergence is often described as enterprise cloud.

However, ‘true’ enterprise cloud should be a common suite of design, provisioning, management and reporting tools controlling hybrid clouds that allow each service to be hosted and controlled on the most appropriate platform. That’s irrespective of whether these are public, private, hybrid, community, hosted or any combination.

>See also: How cloud computing can transform the pharmaceutical industry

New developments such as Azure Stack, the recent VMware and AWS tie-up, and the increasing maturity of Openstack and its community ecosystem will start to deliver this in 2017.

2. Hyperconvergence

We can expect increasing hype around hyperconvergence in 2017, but complete solutions are still some distance away. Hyperconverged systems are useful building blocks to create base cloud infrastructure, but at the moment they are basically standard platforms for supporting virtualisation, and there is a big gap between what they offer as standard and what organisations need from cloud. They provide the first 20% of the necessary integration, but users will still need to do the remaining 80% themselves.

3. Cloud architecture

Architecting systems for cloud, or working out the optimal method for migrating existing services to cloud, demands different skills from core IT infrastructure design.

With public cloud services, organisations no longer have the ability to uniquely configure each element to their application or service, but have a standard set of building blocks that need to be integrated and cannot normally be changed.

It’s the difference between cooking for yourself from raw ingredients and ordering in a restaurant where the chef has set the menu and you choose the meal, associated ambience and service quality to suit your budget.

Expect to see organisations increasingly developing these architecture skills to achieve successful migrations.

4. Hybrid cloud management – the cloud service broker

To make hybrid cloud work, organisations need an audit function to ensure that the service is and remains fit for purpose, and independent service monitoring and management either in-house or contracted through an independent third party to ensure the provider actually provides what they are contracted to.

This is leading to the development of a new role: the cloud service broker, who will both define the services and then determine the most appropriate way to provide, manage and secure them.

Analyst firm 451 Research has highlighted this as a key trend in 2017. CIOs could allocate the role of cloud service broker to a member of their IT team or a third party can provide this service.

5. Managing multiple cloud providers

As organisations increasingly use multiple cloud providers, we are seeing the introduction of cloud management services that provide service integration, management and monitoring for all cloud services contracted by an organisation.

They offer major incident and problem management, with escalation to third parties if required, and may also include asset management of devices and infrastructure.

Simplistically cloud management is ‘lightweight’ SIAM (service integration and management), with the controls, processes and principles of the discipline but without the hefty price tag and long-term contractual commitments ‘full’ SIAM has historically involved.

6. Cloud monitoring as a service

As use of hybrid cloud grows, more organisations are turning to cloud monitoring as a service (CMaaS) to monitor performance across the multiple suppliers that will now be interdependent, and critical, to an organisation’s IT service delivery.

It is vital that these services are independent of the providers themselves but that providers either provide visibility into their service or organisations can contractually ensure that they do.

CMaaS provides integration with public cloud services (e.g. Office 365, Salesforce, Huddle, Google Apps), as well as IaaS and PaaS services (e.g. Microsoft Azure, AWS and Google’s App Engine).

Some services can now do this from a single pane of glass. It can also be used to monitor in-house environments and hosted and private cloud services by deploying or installing gateways into the monitored environment.

7. Moving services between different providers

At present, not many people are dynamically moving workloads between cloud providers, but we expect to see this become more common as users become more familiar with the benefits of cloud and compare the offerings of different providers.

Providers may then respond with competitive pricing, such as we see in the utility sector. Organisations therefore need to design their cloud services with the flexibility to adopt different platforms or alternative cloud suppliers quickly and with minimum impact to existing services, or risk swapping one legacy infrastructure for another.

8. Open source

Most major cloud providers use open source for their services, and for even medium-sized organisations it now provides high-quality tools that can host, manage and integrate providers, with a committed, if slightly disorganised, support network. For those not ready to fully commit, most tools are available for the cost of a standard Linux distribution.

>See also: 6 drivers for moving business to the cloud

9. Securing and auditing services

Moving data to the cloud does not negate the need for an organisation to take proper data security precautions. This means taking responsibility for asking the service provider to deliver the appropriate levels of information security and measuring and auditing the supplier to ensure that the relevant security is applied.

Organisations will become much more sophisticated in the way they evaluate potential cloud suppliers, seeking out independent verification of their capabilities and looking more closely at their governance and data security policies. This will become ever more important in the light of the forthcoming GDPR regulations, and a written definition of all the data security policies and procedures will be required by the regulator when they conduct an audit.

10. New cloud services to address specific issues

As cloud grows in capability and scale, we can expect to see an increasing number of new applications, whose scope is limited only by the ingenuity and vision of cloud service providers. While some will be targeted at niche markets, others will address common problems.

One of the fastest growing services is likely to be patch management, which removes the administrative overhead of ensuring IT systems remain compliant and secure and can quickly address zero day vulnerabilities across both public cloud and on-premise equipment.

Other services beginning to gain traction include identity management as a service, already in use by a major government agency among others, and endpoint data protection and compliance, which provides backup, restoration, compliance and legal hold across all user devices and encrypted data storage on public cloud.

Sourced from Richard Blanford, managing director, Fordway

How to boot multiple Linux distros from one USB – TechRadar

Many of the Linux distribution (distro) ISO files you can download, or that are included on our sister title Linux Format’s cover DVDs, are what are known as hybrid files.

This means that not only can they be written to a CD or DVD in the normal way but they can also be copied to a USB stick with dd. The USB stick will then boot as if it were a DVD.
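The dd copy mentioned above is nothing more than a raw byte-for-byte duplication of the ISO onto the device. A minimal sketch follows; the device name /dev/sdX is a placeholder (always double-check with lsblk before pointing dd at real hardware), and the demonstration below uses ordinary files so nothing is overwritten:

```shell
# The real operation is a straight byte copy (DESTRUCTIVE -- /dev/sdX is
# a placeholder, verify the device name with lsblk first):
#   sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress && sync
# Demonstrated safely here with regular files instead of a block device:
head -c 1048576 /dev/urandom > fake.iso      # stand-in for a downloaded ISO
dd if=fake.iso of=fake-stick.img bs=64K 2>/dev/null
cmp -s fake.iso fake-stick.img && echo "identical copy"   # prints "identical copy"
```

Because the copy is bit-for-bit, the stick ends up with the ISO’s own partition table and boot code, which is exactly why it then boots like a DVD.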

This is a handy way of creating install discs for computers that don’t have an optical drive, but it has one significant drawback: Each ISO image requires a USB flash drive to itself.

With USB sticks holding tens or even hundreds of gigabytes costing only a few pounds, and small drives becoming harder to find, this is a waste of space both on the stick and in your pocket or computer bag.

Wouldn’t it be good to be able to put several ISO files on the same USB stick and choose which one to boot? Not only is this more convenient than a handful of USB sticks, it’s both faster and more compact than a handful of DVDs.

The good news is that this is possible with most distros, and the clue to how it’s done is on Linux Format’s cover DVDs each month. They used to laboriously unpack distro ISOs onto the DVD so that they could boot them and then had to include scripts to reconstruct the ISO files for those that wanted to burn a single distro to a disc. 

Then they started using Grub to boot the DVD, which has features that make booting from ISO files possible. The main disadvantage of this approach, at least for the poor sap having to get the DVD working, is that different distros need to be treated differently and the options to boot from them as ISOs are rarely documented.

We will show you how to set up a USB stick in the first place and the options you need for popular distros. We will also show you how to deal with less co-operative Linux distros.

Use GParted or one of the command-line tools to prepare your flash drive. Giving the filesystem a label is important for booting some distros’ ISOs.

EFI booting

In this instance, we’ve created a flash drive that uses the old style MBR booting.

While most computers of the last few years use UEFI, they still have a compatibility mode to boot from an MBR.

So this makes our stick the most portable option, but if you need to boot your stick using UEFI, change the grub-install command to use the UEFI target, like this:

$ sudo grub-install --target=x86_64-efi --boot-directory=/media/MULTIBOOT/boot /dev/sde

This is a 64-bit target, as UEFI is only fully supported on 64-bit hardware. If you want to use your USB stick with 32-bit equipment, stick with the MBR booting method.

Setting up the USB stick

First, we need to format the USB stick. We will assume that the USB stick is set up with a single partition, although you could use the first partition of a multi-partition layout. 

What you cannot get away with is a stick formatted with no partition table, as some are. If that’s the case, use fdisk or GParted to partition the drive, then you can create the filesystem. 

The choice of filesystem is largely up to you, as long as it is something that Grub can read. We’ve used FAT and ext2 (there’s no point in using the journalling ext3 or ext4 on a flash drive). Use whatever fits in with your other planned uses of the drive; we generally stick with FAT as it means we can download and add ISO images from a Windows computer if necessary.

Whatever you use, give the filesystem a label (we used MULTIBOOT), as it will be important later.

In these examples, the USB stick is at /dev/sde (this computer has a silly number of hard drives) and the filesystem is mounted at /media/MULTIBOOT; amend the paths to suit your circumstances.

First, we install Grub on the stick to make it bootable:

$ mkdir -p /media/MULTIBOOT/boot
$ sudo grub-install --target=i386-pc --boot-directory=/media/MULTIBOOT/boot /dev/sde

Note: the boot-directory option points to the folder that will contain the Grub files but the device name you give is the whole stick, not the partition. Now we create a Grub configuration file with:

$ grub-mkconfig -o /media/MULTIBOOT/boot/grub/grub.cfg

This will create a configuration to boot the distros on your hard drive, so load grub.cfg into an editor and remove everything after the line that says:

### END /etc/grub.d/00_header ###

If you are creating a flash drive to share, you may want to look at the theme section of the Grub manual to make your boot screen look prettier

Adding a distro

This gives us a bare configuration file with no menu entries. If we booted from this stick now, we would be dropped into a Grub shell, so let’s add a menu. 

We’ll start with an Ubuntu ISO because they are popular (sorry, but they are) and because they make booting from an ISO file easy (after all, it’s Ubuntu, it makes most things easy). Load grub.cfg back into your editor and add this to the end of the file:

submenu "Ubuntu 16.04" {
 set isofile=/Ubuntu/ubuntu-16.04-desktop-amd64.iso
 loopback loop $isofile
 menuentry "Try Ubuntu 16.04 without installing" {
  linux (loop)/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=$isofile quiet splash ---
  initrd (loop)/casper/initrd.lz
 }
 menuentry "Install Ubuntu 16.04" {
  linux (loop)/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=$isofile only-ubiquity quiet splash ---
  initrd (loop)/casper/initrd.lz
 }
}

Create the Ubuntu directory on the drive and copy over the ISO file. Then unmount the drive and reboot from the stick. 

You should see a Grub menu with one entry for Ubuntu that opens up to reveal boot and install options.

This is the basic menu you get with a default Grub configuration—functional but not very pretty

Special options

The first line creates a variable containing the path to the ISO file. We use a variable because it means we only need to make one change when we want to adapt the menu to a different release. 

The second line tells Grub to mount that as a loop device (a way of mounting a file as if it were a block device). 

Then we have the two menu entries. You may be wondering how we know what options to add to the menu entries. That comes from a combination of looking at the ISO’s original boot menu and knowing what to add for an ISO boot.

The latter, in the case of Ubuntu, is to add

iso-scan/filename=$isofile

where the variable isofile was set to the path to the file a couple of lines earlier. To see the original boot menu, we need to mount the ISO file, which is done like this:

$ sudo mount -o loop /path/to/iso /mnt/somewhere

Most ISOs use isolinux to boot so you need to look at the CFG files in the isolinux or boot/isolinux directory of your mounted ISO file. 

The main file is isolinux.cfg but some distros use this to load other CFG files. In the case of Ubuntu, this is in a file called txt.cfg. You’re looking for something like:

label live
 menu label ^Try Ubuntu without installing
 kernel /casper/vmlinuz.efi
 append file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash ---

The kernel setting translates to the linux option in Grub with the addition of (loop) to the path. Similarly, the initrd part of the append line corresponds to Grub’s initrd line.

The rest of the append line is added to the Grub linux line along with the iso-scan option. This approach will work with most distros based on Ubuntu, although some have removed the ISO booting functionality for some reason. It’s possible to add this back, as we will see shortly.

Linux Gaming In 2016: 1000+ Games Released On Steam With Linux Support – Fossbytes

Short Bytes: There’s no denying that Linux systems aren’t the No. 1 choice of hardcore gamers. But the scenario is changing, and the trend is reflected in the number of games released with Linux support on Steam. Overall, in 2016, 1,018 games were released on Steam with Linux support.

Do you remember the time when people didn’t even consider Linux-based machines for playing computer games with impressive graphics? Well, times have changed and Linux kernel developers and distribution vendors are putting serious efforts into adding better support to modern GPUs and their drivers.

Popular Linux gaming news website Gaming on Linux recently published the 2016’s Linux gaming overview. This year, more than 1,000 games have been released on Steam with Linux support.

According to the exact count in the overview, 1,018 games have been released this year.

Another interesting piece of data was shared by @Steam_Spy, who notes that 38% of the games on Steam were released this year:

This percentage and count will surely rise next year. The Linux gaming scenario is improving each year and so is the quality of games.

Well, do Fossbytes readers use Linux machines for gaming purposes? Don’t forget to tell us your views and system configuration in the comments section below.

Also Read: 9 Best Text Editors For Linux And Programming | 2017

Serious Ubuntu Linux desktop bugs found and fixed – ZDNet

If you, like me, use Ubuntu desktop, or one of its relatives such as Linux Mint, you have a bug to patch.

Basic Apport Prompt

If you’re using unpatched Ubuntu Linux, this usually innocent error message may be hiding an attack.

Donncha O’Cearbhaill, an Irish security researcher, found a remote execution bug in Ubuntu. This security hole, which first appeared in Ubuntu 12.10, makes it possible for malicious code to be injected into your system when you open a booby-trapped file. This can be used to crash your system or run malware. It does not — a small blessing — enable attackers to become the root user.

More security news

O’Cearbhaill found that Ubuntu will open any unknown file with Apport if it begins with “ProblemType: “. Apport is Ubuntu’s default crash handler and crash reporting program. So far, so good.

Apport in turn generates a crash file with the unusual “.crash” extension and a magic byte sequence. Magic bytes are the unique sequences meant to identify a file. For example, a PDF document without a PDF extension can still be identified as PDF by its hexadecimal magic byte sequence: “25 50 44 46.”
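A rough sketch of the idea, assuming nothing about Ubuntu’s actual MIME database: a program can identify a file by comparing its first few bytes against known signatures, which is exactly the mechanism Apport’s “ProblemType: ” prefix piggybacks on. The sniff function and signatures below are illustrative only:

```shell
# Hedged sketch: type detection by leading magic bytes. The two
# signatures checked are the PDF magic (%PDF, hex 25 50 44 46) and the
# 13-byte "ProblemType: " prefix the article says Apport keys on.
sniff() {
  head -c 13 "$1" | grep -q '^ProblemType: ' && { echo apport-crash; return; }
  head -c 4 "$1"  | grep -q '^%PDF'          && { echo application/pdf; return; }
  echo unknown
}

printf 'ProblemType: Crash\n' > a.txt
printf '%%PDF-1.4\n'          > b.txt
sniff a.txt   # prints "apport-crash"
sniff b.txt   # prints "application/pdf"
```

Note that the file extension plays no part here, which is why renaming (or removing) an extension does nothing to stop this kind of identification.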

Magic bytes, of course, can be abused, and that’s in part what’s happened here. When Ubuntu is presented with an unknown file, it will first try to match its Multipurpose Internet Mail Extensions (MIME) extension. If that fails, it will fall back to matching the magic bytes.

So, an attacker can create a file with the Apport magic bytes identification. Now, you would not normally open a file with the extension “.crash”, but you might open a file without an extension. If you do, Apport will open it and display a minimal crash report prompt. If you elect to “Show Details”, you’ve just opened yourself up for an attack.

That’s because within the bogus Apport crash file, a hacker can use the Apport Crash Report Format to hide a directive to run Python code listed in the CrashDB field. This command will then be parsed and executed without any further user interaction.

In short, Apport doesn’t properly sanitize the Package and SourcePackage fields in crash files before processing them.
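To make the shape of the attack concrete, a booby-trapped crash file would look roughly like the fragment below. This is a schematic illustration based on the article’s description of the crash report format, not a working exploit; the field values are placeholders:

```
ProblemType: Crash
Package: bash 4.3
SourcePackage: bash
CrashDB: {'impl': 'memory', 'crash_config': <arbitrary Python expression, evaluated by Apport>}
```

The danger is entirely in that last field: because Apport evaluates the CrashDB value as Python rather than treating it as inert data, whatever expression the attacker puts there runs with the user’s privileges.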

Adding insult to injury, another bug, of the Path Traversal family, enables an attacker to run Python files to cause even more trouble. In practice this means that: “An attacker could plant a malicious .py file and a crash file in the user’s Download directory to get code execution.”

Worse still, if the user has a user ID (UID) of 500 or less, Apport will use Polkit (formerly PolicyKit) to prompt the desktop user for root privileges with a generic “System program problem detected” message. If you do so, congratulations. You’ve just granted the attacker the ability to run commands as root.

The good news is that the problems have been patched. So, now that you’re almost done reading this, patch your system already.

The bad news is there still aren’t enough eyes looking at older open-source code for overlooked security vulnerabilities.

Worse still, as O’Cearbhaill points out, “The computer security industry has a serious conflict of interest right now. There is major financial motivation for researchers to find and disclose vulnerabilities to exploit brokers. Many of the brokers are in the business of keeping problems unfixed. Code execution bugs are valuable. As a data point, I received an offer of more than 10,000 USD from an exploit vendor for these Apport bugs. These financial motivators are only increasing as software gets more secure and bugs become more difficult to find.”

The answer? Don’t simply hope programmers will work for the common good. Instead, O’Cearbhaill believes companies should support vulnerability reward programs such as The Internet Bug Bounty project.

Related Stories:

3 Ways Cloud Computing Improves Manufacturing Business Agility – Business Solutions Magazine

Dan Johnson, President, Visual Business Solutions

By Dan Johnson, President, Visual Business Solutions

The manufacturing industry has always been on a quest to reduce cost, increase productivity, and minimize waste. Over the years, business strategies that have attempted to achieve this trifecta of manufacturing goals have come and gone. The Lean concept was one of the first that helped companies successfully streamline operations. It was followed by Agile which, as its name suggests, went beyond reducing waste and increasing efficiency to add flexibility and versatility to manufacturing capabilities, helping companies not only run leaner and meaner, but with an increased agility to help them quickly respond to market changes and competitors.

In today’s increasingly digitized manufacturing environment and hyper-speed markets, it has become harder for manufacturers hamstrung by premise-based legacy systems to meet rapidly-changing consumer preferences and product development requirements.

Enter the cloud.

Suddenly, manufacturers have operational options. Cloud computing seemed custom-made for manufacturers who had embraced Agile but ran up against an IT wall when they tried to quickly scale using a premise-based IT infrastructure and static ERP system. When Agile met cloud computing, the IT handcuffs came off and manufacturers were able to take all those previously streamlined, Lean processes, strip away the resource intensive, premise-based infrastructure and turn them into industrial-production-systems-as-a-service. Using a cloud-based IT ecosystem enables Agile companies to quickly and efficiently scale computing and database requirements up or down as customer or market demands require.

If your company has not yet gone to the cloud, here are three ways cloud computing can improve your business agility and efficiency.

1. The Cloud Reduces Spend, Increases Productivity

The cloud enables manufacturers to spend less time and money managing and upgrading their premise-based IT and more time growing their businesses. A cloud-based IT environment is easily scalable to handle all business processes as well as product development and manufacturing, without the high cost of internal IT infrastructure.

2. Cloud-Based Software-as-a-Service (SaaS) Is Automatically Upgraded

Unlike premise-based IT systems, cloud-based software is automatically upgraded and proactively maintained for optimum performance. In addition to keeping your software optimized, SaaS helps keep your processes compliant with current industry requirements and government regulations.

3. Cloud Computing Fuels Growth

Manufacturers can respond quickly and easily to competitive pressure and market trends. The cloud enables companies to expand existing operations or even open new business units without having to expand or integrate IT infrastructure, substantially minimizing both capital investment and risk.

Cloud computing provides true manufacturing agility with on-demand, real-time access to data and resources, reduced costs and risk, as well as marketing and production scalability. The flexibility of the cloud and the speed of deployment make manufacturing systems particularly well suited to cloud-based ERP solutions and supply chain management for seamless, end-to-end business process and production integration.

With all this talk about manufacturing processes and systems, it’s easy to overlook the human factor. But cloud computing can have a profound, positive effect on the manufacturing workforce, enabling workers to do their jobs better, faster, and with greater flexibility by removing the static restraints of proprietary, premise-based systems. Workers today are already used to living in the cloud with smartphones, Wi-Fi-enabled devices, mobile apps and other Internet-powered tools that seem to impact every aspect of everyday living.

Today’s workers already interact easily with Internet-based connectivity and the cloud, so using them to perform their jobs is an intuitive extension requiring minimal training, unlike having to learn an arcane, premise-based legacy system that has been customized and patched over time. Employees are better able and more inclined to fix problems on their own if they no longer have to constantly enlist IT’s help.

Cloud computing is here to stay. As technology continues to evolve, so does manufacturing, and for companies still struggling with rapidly-aging, premise-based IT systems, cloud computing is the logical next step that will help them stay agile, competitive, and growing.

Dan Johnson is President of Visual Business Solutions, whose goal is to guide clients along a clear and practical path to who they want to be. Dan is a dynamic and results-driven executive with an extensive background in new business development, strategic planning and relationships, market development, and operations management. He is a true expert in anticipating, meeting, and exceeding customers’ needs. In previous roles, Dan led the sales function at several corporations and was noted for delivering record-breaking results in business growth and profitability.

HPE, Cisco maintain lead in cloud infrastructure equipment market – Cloud Tech


The competition in cloud seems tough, as organisations gird themselves to grab the lion’s share of the market. The new Q3 data from Synergy Research Group shows Hewlett Packard Enterprise (HPE) maintained a narrow lead over rival Cisco in the strategically important cloud infrastructure equipment market.

The data showed Dell EMC challenging the top two after the completion of their historic merger. 

At the same time, original design manufacturers (ODMs) or contract manufacturers, in aggregate, are continuingly increasing their share of the market, driven by continued heavy investments in data centres by hyper-scale cloud providers. Microsoft and IBM complete the group of top cloud infrastructure vendors.

According to Synergy, HPE and Cisco have been in a closely contested leadership battle in the cloud market for the last sixteen quarters, over which time their total revenues are virtually identical.

And across the different types of cloud deployment, Cisco continues to hold a commanding lead in public cloud infrastructure while HPE has a clear lead in private cloud.

Revenues continue to grow

The research firm predicts that total cloud infrastructure equipment revenues, covering public and private cloud, hardware and software, will reach $70bn in 2016 and continue to grow at a double-digit pace.

Servers, OS, storage, networking and virtualisation software, all combined, accounted for 94% of the Q3 cloud infrastructure market and the balance comprised cloud security and cloud management.

Moreover, HPE leads the cloud server segment and is a main challenger in storage, while Cisco is dominant in the networking segment and also has a growing server product line.

Dell EMC is the second-ranked server vendor and has a clear lead on storage. Microsoft features heavily in the ranking due to its position in server OS and virtualisation applications, while IBM maintains a strong position across a range of cloud technology markets.

John Dinsdale, a chief analyst and research director at Synergy Research Group, said: “Growth in private cloud infrastructure is slowing down as enterprises shift more attention and workloads to the public cloud, but that means that there is a continued boom in shipments of infrastructure gear to public cloud providers.

For traditional IT infrastructure vendors there is one fly in the ointment though — hyperscale cloud providers account for an ever-increasing share of data centre gear and many of them are on a continued drive to deploy own-designed servers, storage and networking equipment, manufactured for them by ODMs.

ODMs in aggregate now control a large and growing share of public cloud infrastructure shipments.”

Cloud Computing: One-Third of Enterprise IT Budgets Spent on Cloud – Formtek Blog (blog)

By Dick Weisinger

Enterprises are expected to spend on average 34 percent of their IT budgets on cloud hosting and services in 2017, according to 451 Research. That’s up from 28 percent in 2016.

The breakdown of cloud spending is as follows:

  • 31 percent – infrastructure
  • 42 percent – application services
  • 14 percent – managed services
  • 9 percent – security services
  • 5 percent – professional services for cloud enablement
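
As a quick arithmetic illustration of the breakdown above, here is a small sketch that applies those percentages to a hypothetical $10M IT budget (the dollar figure is invented for this example, not from the report):

```python
# Hypothetical illustration: applying the 451 Research percentages to an
# imaginary $10M IT budget. The dollar figure is invented for this sketch.
IT_BUDGET = 10_000_000
CLOUD_SHARE = 0.34  # 34% of IT budgets expected to go to cloud in 2017

# Breakdown of the cloud portion (percentages as reported; they sum to
# 101 due to rounding in the source data).
breakdown = {
    "infrastructure": 0.31,
    "application services": 0.42,
    "managed services": 0.14,
    "security services": 0.09,
    "professional services": 0.05,
}

cloud_budget = IT_BUDGET * CLOUD_SHARE
allocation = {k: round(cloud_budget * v) for k, v in breakdown.items()}

assert allocation["application services"] == 1428000
```

Under those assumptions, application services alone would account for roughly $1.4M of a $10M budget.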

Liam Eagle, research manager at 451 Research, said that “the market for managed infrastructure and application services is a longer tail market, with greater opportunities for providers who emphasise expertise in operating, optimizing and securing the infrastructure and application products they deliver.”

Your cloud strategy must include no-cloud options – InfoWorld

It’s Monday morning. You go into work as if it’s a normal day, but the office is quiet. There’s a press release on your desk stating that your company has put its datacenters up for sale, and cloud will be its new platform.

This strategic shift is increasingly commonplace. Although it makes the company look innovative and thrifty to stockholders, it also leaves IT with the challenge of making it all work.

The shift to the cloud is very sensible. Most applications have a place on the public cloud—that is, if they are modern enough to use the public cloud as a platform. Indeed, in some cases the application workloads don’t have to be modified or need very little modification. Migrating to the cloud often makes sense.

But often is not always. Some applications—about 20 to 40 percent—don’t have good platform analogs in public cloud platforms. Most are built using older mainframe or minicomputer technologies; although they’re not pretty to look at, they run the business nonetheless. Modifying those applications to run in the cloud is cost-prohibitive.

How an old Drawbridge helped Microsoft bring SQL Server to Linux … – Ars Technica

When Microsoft announced in March this year that it was bringing SQL Server to Linux, the reaction was one of surprise, and the announcement prompted two big questions: why and how?

SQL Server is one of Microsoft’s major Windows applications, helping to drive Windows sales and keep people on the Windows platform. If you can run SQL Server on Linux, well, that’s one less reason to use Windows.

And while SQL Server does share heritage with Sybase SQL Server (now called SAP ASE), a Unix database server that Microsoft ported to Windows, that happened a long time ago. Since 1993, when Sybase and Microsoft went their separate ways, the products have diverged and, for the last 23 years, Microsoft SQL Server has been strictly a Windows application. That doesn’t normally make for easy porting.

Reaching the customers where they are

Even as a Windows-exclusive offering, SQL Server has widespread adoption, taking a substantial chunk of the paid database market. But being Windows-specific has meant that, equally, there was always a proportion of the market that SQL Server couldn’t reach. It doesn’t much matter what the merits of the database engine really are; it was off-limits to organizations that standardized on Linux.

Rohan Kumar, general manager of database systems at Microsoft, told us that there had long been some level of demand from customers who wanted the flexibility to pick a database without being forced onto a specific operating system but that over the past two to three years these calls had become louder. Containerization and Docker, in particular, were cited as stimulating this demand; organizations want to take advantage of these features to streamline their deployments and management, but at the same time, they want SQL Server’s features and its not-as-cripplingly-expensive-as-Oracle licensing.

(Windows Server 2016 supports Docker-compatible containers, too, but the broader point remains: companies want to be able to take advantage of platform features as they become available without tying their hands when it comes to decisions about applications.)

In the past, Microsoft might have ignored these requests. In the past, Microsoft did ignore these requests. But the company is changing and is increasingly going after opportunities that it might once have ignored. This has taken many forms over the past few years. The release of Office for iPad, for example, cedes what might otherwise have been a unique advantage for Windows-based platforms, but the opportunity to consolidate Office’s position across the industry and strengthen the Office 365 subscription value proposition has outweighed that concern.

Reaching more customers with more features

SQL Server is seeing two changes that we might not have expected from the Microsoft of old. One of these is already available today; with Service Pack 1 for SQL Server 2016, Microsoft radically changed the way the database’s different versions and price points worked. Traditionally, the different variants of SQL Server—Express, Standard, Enterprise—have offered two kinds of variation. They have different levels of scalability—Express is limited to a certain amount of RAM and certain sizes of database—and different features. Some high-end features, such as extensive encryption support, were unavailable in the cheaper versions of the database.

With Service Pack 1, the feature differences were substantially eliminated, leaving only the scalability differences. This means that even the free Express version, for example, has the same encryption capabilities as the expensive versions; it just can’t be used to host such big databases.
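
The post-Service Pack 1 model can be sketched roughly as follows. This is an illustrative toy, not Microsoft's actual licensing logic, and the numeric limits here are placeholders rather than the exact published figures:

```python
# Sketch of the post-Service Pack 1 model: every edition shares the same
# feature set; only scalability caps differ. The feature names and
# numeric limits are illustrative placeholders, not Microsoft's figures.
FEATURES = {"always_encrypted", "row_level_security", "in_memory_oltp"}

EDITION_LIMITS = {          # max database size in GB (illustrative)
    "Express": 10,
    "Standard": 524_288,
    "Enterprise": None,     # None = no cap
}

def supports_feature(edition: str, feature: str) -> bool:
    # Post-SP1, feature availability no longer depends on edition.
    return feature in FEATURES

def can_host(edition: str, db_size_gb: float) -> bool:
    cap = EDITION_LIMITS[edition]
    return cap is None or db_size_gb <= cap

assert supports_feature("Express", "always_encrypted")   # same features...
assert not can_host("Express", 50)                       # ...smaller caps
assert can_host("Enterprise", 50)
```

The point of the design is visible in the two checks: the free edition passes the feature test but fails the scale test.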

Scott Guthrie.

Scott Guthrie, executive vice president for cloud and enterprise, told us that this kind of change came with some risks, but it also offered opportunities. He pointed to Visual Studio Community Edition, released in 2014, as a kind of precursor to this. Making a fully featured version of Visual Studio available for free was a decision that could have cost revenue as paying developers moved to the free version (prior to Community Edition, the Express Edition products were free, but these had certain functional constraints not found in Community Edition), but Guthrie told us that the company had actually seen Visual Studio revenue grow. Teams and companies that otherwise might not have used Visual Studio at all used the product and became drawn to it.

Guthrie expresses a similar hope for Service Pack 1. Sure, some companies may have been paying for SQL Server solely to access features not found in the Express SKU, and they may now save that money, but the company hopes that this will be offset by developers making greater use of SQL Server in general, making greater use of SQL Server’s complex features (ones previously available only in expensive versions), and seeing increased sales as these new applications need the scalability that the paid SKUs provide.

Kumar told us that expectations for SQL Server on Linux were similar. There was some nervousness and trepidation when the decision was first made to bring SQL Server to Linux—would it simply cut Windows revenue as companies switched to the cheaper server operating system?—but as the project progressed, the team grew more confident that it would open up SQL Server to a whole set of customers who previously couldn’t even contemplate it because of its OS dependency.

Moreover, he said that without offering this kind of flexibility, there were customers using SQL Server that were likely to defect to other options anyway to reduce their platform lock-in. SQL Server for Linux carries risks, but so does keeping SQL Server restricted to Windows. The opportunity was felt to outweigh these concerns.

The initial response has been encouraging, with 21,000 people signing up to use the Linux preview and 3,000 to 4,000 using it extensively.

Lowering the Drawbridge

The decision to go ahead with SQL Server for Linux was made about 18 months ago. The question then became: how to do it?

It was seen as essential that SQL Server on Linux have identical semantics and performance to SQL Server on Windows, right down to the level of database file compatibility. This would be very difficult to do if the software were forked to have two separate versions each with their own approach to file I/O, memory management, threading, and so on.

But SQL Server is a large application, and although its interactions with Windows are relatively narrow—things like the graphical management tools are remaining Windows-only, at least for the time being, so a large part of the Windows API surface is avoided—it still uses about 1,500 Win32 API calls. Supporting all of these on Linux would be a major undertaking.

Drawbridge runs an almost complete operating system within a process.

The first piece of the puzzle was a Microsoft Research project called Drawbridge, completed in 2011. The Drawbridge project explored a new approach to process virtualization and isolation with two major elements: a picoprocess, a lightweight process that has access to a narrow set of about 50 low-level kernel calls, and a Library OS (LibOS), a modified operating system stack designed to run within a picoprocess.

An application and LibOS run within the picoprocess together, with LibOS providing all the operating system-like functionality that the application depends on, such as threading, virtual memory management, and a full set of file I/O features. LibOS talks to the underlying kernel using those 50 or so API calls.

This approach offers much of the isolation and security that virtual machines offer—each picoprocess has its own LibOS, so even if the application is compromised and the kernel attacked, it’s only the LibOS “kernel” running in user mode, rather than the real kernel running below—but with lower overheads. LibOS doesn’t need to provide an entire operating system’s worth of functionality; it can be restricted to only the APIs and services that a specific application needs. This cuts the per-application memory and processor overhead when compared to running each individual application within a virtual machine.
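
The picoprocess/LibOS split described above can be sketched as a toy model. Everything here, the class names and the tiny three-call "host ABI", is invented for illustration; the real Drawbridge ABI has roughly 50 low-level calls:

```python
# Toy model of the Drawbridge idea: the application sees a rich OS-like
# API, but everything is implemented in-process by a "library OS" that
# only ever touches a narrow host ABI. All names here are invented for
# illustration.

class HostABI:
    """Stand-in for the ~50-call narrow interface a picoprocess may use."""
    def __init__(self):
        self._blocks = {}
    def alloc_block(self, key):           # low-level storage primitive
        self._blocks[key] = bytearray()
    def write_block(self, key, data):
        self._blocks[key] += data
    def read_block(self, key):
        return bytes(self._blocks[key])

class LibOS:
    """Implements file-like semantics entirely on top of HostABI."""
    def __init__(self, host: HostABI):
        self.host = host
        self.open_files = set()
    def open(self, path):
        if path not in self.open_files:
            self.host.alloc_block(path)
            self.open_files.add(path)
    def write(self, path, data: bytes):
        self.host.write_block(path, data)
    def read(self, path) -> bytes:
        return self.host.read_block(path)

# The "application" only talks to LibOS; the host kernel only ever sees
# the narrow ABI, which is what gives the picoprocess its isolation.
libos = LibOS(HostABI())
libos.open("/tmp/demo")
libos.write("/tmp/demo", b"hello")
assert libos.read("/tmp/demo") == b"hello"
```

A compromised application here could corrupt its own LibOS state, but the surface it can attack in the host is limited to those three primitive calls.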

Though the Drawbridge project ended in 2011, Microsoft has drawn inspiration from it already. The picoprocess concept is used for the Windows Subsystem for Linux. Linux processes on Windows are “empty” processes that omit the usual Windows libraries. In fact, they can’t even make Windows kernel function calls. WSL doesn’t use a Library OS. Instead, it uses a kernel-mode component that offers the WSL picoprocesses the ability to make Linux kernel function calls; that component offers a subset of the Linux API built on top of the Windows NT kernel.

SQL Server’s operating system on top of an operating system

The second major element to SQL Server for Linux is a piece of work that the SQL Server team did for SQL Server 2005, called SQLOS.

Dependent as it is on Windows, SQL Server is actually something of an anomaly among Windows programs, because it avoids using Windows as much as it can. Windows, as a general-purpose OS, offers all manner of functionality. It has a filesystem with complex data caching. It has a thread scheduler designed for everything from single-core machines up to monstrous servers with hundreds of cores; it has a memory manager that works with anything from a browser on a machine with 2GB of RAM to a database on a machine with 1TB of RAM. It spans a huge range of configurations and workloads, and it does so with a single kernel that works the same way (albeit with a small number of tunable options) across the board.

SQL Server, however, is a performance-critical application that often (though not always) runs on dedicated hardware. The engineers understand its workload very well and want to optimize for that workload and that workload alone. Accordingly, they’ve designed SQL Server to handle many of the tasks that the operating system would normally handle.

For example, most Windows applications use the Windows file cache to improve disk I/O performance. SQL Server, however, uses unbuffered I/O, bypassing the Windows cache entirely, and instead implements its own internal caching system. Similarly, SQL Server handles scheduling and switching between threads, another task traditionally handled by the OS.
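
The "implement your own cache" idea can be sketched with a minimal buffer pool. This is a toy standing in for the concept, assuming a simple LRU policy; the real engine's caching is far more sophisticated:

```python
# Minimal sketch of a database-managed page cache: a fixed-size buffer
# pool with LRU eviction, standing in for SQL Server's internal caching.
# The real engine's policies are far more sophisticated; this is a toy.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity: int, read_page):
        self.capacity = capacity
        self.read_page = read_page   # unbuffered read from "disk"
        self.pages = OrderedDict()   # page_id -> bytes, in LRU order
        self.hits = self.misses = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.hits += 1
            self.pages.move_to_end(page_id)      # mark recently used
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recent
            self.pages[page_id] = self.read_page(page_id)
        return self.pages[page_id]

disk = {i: bytes([i]) * 8 for i in range(10)}    # fake on-disk pages
pool = BufferPool(capacity=2, read_page=disk.__getitem__)

pool.get(1); pool.get(2); pool.get(1)   # second read of page 1 is a hit
assert (pool.hits, pool.misses) == (1, 2)
pool.get(3)                              # evicts page 2, the LRU page
assert 2 not in pool.pages and 1 in pool.pages
```

Owning the cache like this is what lets the database choose exactly which pages stay resident, instead of competing with every other process for the OS file cache.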

SQLOS (called "SOS" in this diagram) also replicates much of an operating system's functionality.

Thanks to SQLOS, the core SQL Server engine doesn’t actually depend on Windows very much. SQLOS uses a few low-level features from the Windows kernel and then implements its own “operating system” on top of them.

In other words, SQLOS is doing a lot of the kind of work that LibOS does.

Not every part of SQL Server uses SQLOS. The SQL Server Agent service, used for scheduling tasks within the database, relies on regular OS handling of file I/O and process scheduling and so on. So too do elements such as SQL Server’s XML support (which uses Microsoft’s MSXML XML library) and SQLCLR, the .NET runtime embedded within the database. These need something closer to full Win32 to run—and again, that sounds a lot like LibOS as it provides a version of Win32, or at least a large part of it.

Taking the best of both worlds

Accordingly, Microsoft took these ideas and merged them. The final snapshot of the LibOS code from Drawbridge was merged with SQLOS, producing what Microsoft calls SQLPAL, the SQL Platform Abstraction Layer. Whereas the Drawbridge LibOS offered Windows-style thread scheduling, I/O management, and so on, SQLPAL took the SQL Server-optimized code from SQLOS for these features, creating a kind of cut-down version of Win32 with its performance tailored to SQL Server’s needs.

SQLPAL’s Win32 support is not complete—around 1 percent (81MB) of Windows libraries are used, with SQLPAL itself being another 8MB—but it’s sufficient to run MSXML, SQL Server Agent, and many of the other essential parts of SQL Server. While these components used the regular Windows scheduler, I/O, and memory management in older SQL Server, with SQLPAL even these parts now use the optimized, specialized routines.

Drawbridge's LibraryOS combined with SQLOS gives us SQLPAL.

Beneath SQLPAL is a layer called a “host extension” (the equivalent in Drawbridge was called a Platform Abstraction Layer) that provides the bridge to the underlying platform, either Linux or Windows. Everything above the host extension is common code; the host extension itself has Win32 and Linux versions.

Indeed, the things running on top of the host extension are not merely common code; they’re common binaries. SQL Server on Linux is not a Linux executable in Linux’s ELF format. It’s a Windows executable, in Windows’ PE format. You could in principle take it and run it on Windows.

The design means that in broad strokes, the performance of SQL Server on Linux and SQL Server on Windows will be matched; the underlying OS is in many regards bypassed completely except for the set of calls that the host extension makes. However, a few pieces are performance critical, such as disk I/O. In these areas, Microsoft has cut back the amount of code between SQL Server and the underlying platform, enabling SQL Server to make more or less direct calls to the underlying operating system APIs. In some cases this requires a small amount of translation code—for example, SQL Server is built to use Windows’ scatter/gather I/O, which uses different memory layouts to Linux’s vectored I/O, so there must be conversion between the two—but often even this isn’t needed.
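
The scatter/gather mismatch mentioned above can be illustrated with a toy converter. This is an assumption about the general shape of such glue code, not Microsoft's actual implementation: Windows gather I/O works on arrays of fixed-size page buffers, while POSIX vectored I/O (readv/writev) takes arbitrary (base, length) segments:

```python
# Illustrative sketch of scatter/gather translation: Windows gather I/O
# takes an array of fixed-size page buffers, while POSIX vectored I/O
# (writev) takes (base, length) segments. This toy converter is an
# assumption about the shape of that glue, not Microsoft's actual code.
PAGE_SIZE = 8192  # SQL Server's 8 KB page

def pages_to_iovecs(pages):
    """Turn a Windows-style page array into iovec-style segments."""
    for p in pages:
        assert len(p) == PAGE_SIZE, "gather I/O requires whole pages"
    return [(p, len(p)) for p in pages]

def vectored_write(buffer, iovecs):
    """Stand-in for writev(): write each segment in order."""
    for base, length in iovecs:
        buffer += base[:length]
    return buffer

pages = [bytes([i]) * PAGE_SIZE for i in range(3)]
out = vectored_write(bytearray(), pages_to_iovecs(pages))
assert bytes(out) == b"".join(pages)   # same bytes land on "disk"
```

Because both models ultimately describe the same sequence of bytes, the conversion is cheap, which is why Microsoft can afford a thin translation layer on this hot path.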

Work in progress

In the current preview release, the merging of SQLOS and LibOS is still ongoing. Currently, SQL Server includes its own SQLOS, sitting atop the SQLPAL. Microsoft is working to remove SQLOS and make SQL Server call into SQLPAL directly. When this is done, there will be a single SQLPAL with both SQL Server and all its ancillary processes and libraries running on top of it.

There are still parts of SQL Server that won’t be in the initial stable release of SQL Server for Linux. Some of these, such as a SQL Server feature called FileTables, are tricky because they depend on specific features of the NTFS filesystem. Similarly, integrated Windows authentication depends on Windows. It’s not clear how these features will be brought to Linux, if indeed they are. Other missing features, such as Full Text Search and Reporting Services, are likely to be brought over eventually; they just require additional developer time and so won’t be ready in time for the initial release. The long-term goal is to have every part of SQL Server that makes sense to run on Linux running on Linux.

Testers of the preview have even expressed a desire for this to include SQL Server’s graphical management tools. Currently, SQL Server for Linux offers only command-line-based management. The graphical tools run on Windows only at the moment, though they can be used to manage the database running on both operating systems.

To be truly credible as a Linux database, there’s more work than just porting SQL Server itself. Application support is also important. Microsoft has taken some steps toward this already—for example, the JDBC driver, used to access SQL Server from Java applications, was recently open sourced—but there’s much more work to do to ensure that SQL Server is on the same footing as other Linux database engines when it comes to support from management front-ends and software libraries. Kumar told us that this is an area of ongoing effort and that the importance is recognized.

With SQL Server on Linux, the Drawbridge project has become one of Microsoft’s more obviously fruitful pieces of research. Both the picoprocess concept and the LibOS concept have been used, albeit separately from one another, in software that is shipping or soon will be. Early indications suggest strong interest in SQL Server for Linux, and while the company is still a little way off, the end goal of having just “SQL Server,” rather than SQL Server for Windows and SQL Server for Linux, looks attainable.

It's Been a Bad Week for Linux as Several Security Flaws Surface – BleepingComputer

Two security researchers published details this week about several security flaws that allow attackers to execute code on affected machines and take over devices. These flaws affect Linux distros such as Fedora and Ubuntu, and two of the exploits are zero-days, meaning there is no patch to prevent attacks.

Zero-days discovered affecting Fedora and Ubuntu

The first to publish his research was Chris Evans, who disclosed the two zero-days affecting GStreamer, a multimedia framework used to index and generate thumbnails and previews for files in various Linux desktop environments. Evans says that an attacker can host a malicious audio file online that, when the user downloads it, will automatically be indexed by GStreamer.

The file, either a FLAC or MP3, would tell GStreamer that it is a SNES music file. Because GStreamer ships with support for playing these files, it will emulate a SNES (Super Nintendo Entertainment System) in order to index the file. The GStreamer libraries tasked with this operation contain vulnerabilities that allow the attacker to execute code on the user’s machine. This occurs when the file contains malicious instructions telling GStreamer to emulate a SNES with a Sony SPC700 audio processor. Additionally, GStreamer isn’t sandboxed, so any code executed via the framework has access to the OS with the user’s native privileges.

Evans tested his attack scenario on the Fedora 25 and Ubuntu 16.04 LTS distros but says that other Linux versions might be affected as well. He also recorded two videos of his exploit in action.

After ignoring Linux for years, Adobe releases Flash 24 for Linux – Ghacks Technology News

Adobe has just released Flash Player 24, the first final stable release of Adobe Flash Player for GNU/Linux in years.

The company announced back in September 2016 that it would bring back Flash for Linux from the dead. This came as a surprise as it had ignored Linux for the most part when it comes to Flash.

Adobe promised back then that it would provide a Linux version of Adobe Flash Player that would be in sync with the company’s regular Windows and Mac releases of Flash Player.

A beta release of Flash 23 was released at the time with the promise that a final version would be made available.

This beta version was only available through the Adobe Labs website. Once installed on a device running Linux, browsers like Firefox or Pale Moon would pick up the plugin automatically giving users options to run most Flash content on the Internet.

Most? Adobe stated back then that the Linux version of Flash Player would not support some features, for instance GPU 3D acceleration and premium video DRM. For those, the company recommended the Chrome web browser and its integrated version of Flash Player, which does not have those limitations.

If you point your web browser to the Adobe Flash Player download site, and more precisely to its “other versions” page, you will notice that Flash Player 24 Final for Linux is now provided as a download.

This is the same version that is offered for Windows and Mac operating systems as well. You can select 32-bit or 64-bit Linux from the menu, and pick one of the available versions afterwards.

The following versions are listed currently:

  • Flash Player 24 for Ubuntu (apt)
  • Flash Player 24 for Linux (YUM)
  • Flash Player 24 for Linux tar.gz both as PPAPI and NPAPI
  • Flash Player 24 for Linux rpm both for PPAPI and NPAPI

The release means that Adobe is offering the same Flash Player version for Windows, Mac and Linux once again.

The release comes at a time when browser makers such as Google, Mozilla and Microsoft are slowly phasing out support for plugins, and thus also for Flash. The companies have set, or will set, Flash to click-to-play to block Flash content from loading automatically. The next step would be to remove support for Flash altogether, but this will probably not happen in the next two or so years, considering that plenty of sites out there still require Flash to work properly.


About Martin Brinkmann

Martin Brinkmann is a journalist from Germany who founded Ghacks Technology News back in 2005. He is passionate about all things tech and knows the Internet and computers like the back of his hand. You can follow Martin on Facebook, Twitter or Google+.

Your smart home could also be a haven for hackers – The News Tribune

Your phone, fitness tracker and home thermostat all will soon have something in common.

As time goes on, more of these devices are upgraded for internet connectivity. While they can offer a world of convenience at our fingertips, experts say consumers are in for a world of hurt if they don’t take steps to thwart a hacker’s attempts to steal data or gain control.

Just two months ago, hackers harnessed internet-connected devices, such as video cameras and digital video recorders, in an attack on a key part of the internet’s infrastructure. The attack caused internet outages and congestion across a wide swath of the country, according to the tech blog Krebs on Security.

Experts fear internet-connected toasters, refrigerators and thermometers — collectively called the Internet of Things — can be conscripted into a virtual army by hackers if companies continue to create products with weak or no security protections.

It’s often impossible to know whether your devices are insecure. In mid-December, a researcher posted about a vulnerability in several models of Netgear-branded routers that could allow hackers to take control of them.

The U.S. CERT Coordination Center at Carnegie Mellon University rated the flaw as critical. Netgear began rolling out beta patch updates for the device Tuesday.


There are so many IoT devices that they will soon rival the number of humans on planet Earth. Information technology research firm Gartner estimates 6.4 billion internet-connected things exist today, a figure that could more than triple to nearly 20.8 billion devices by the year 2020.

These gadgets can exist on the spectrum of fun to absurd. One device allows users to play with their pets via an internet connected camera and a smartphone-controlled laser. Exercise trackers log our steps and our sleep. If you ever felt the need for a Wi-Fi-connected tray that tells you how many eggs you have in the fridge, that exists, too.

With connected devices, our world can be in the palm of our hands. Check temperatures from a smart oral and rectal thermometer from your phone. At the store and not sure if you have enough cheese? This smart refrigerator by Samsung now takes a picture of the contents every time the door is closed. Online retailer Amazon has offered small, Wi-Fi connected devices that order various products with the press of a button — these are part of the Internet of Things, too.

Some products are really helpful, said Ashish Gupta, chief marketing officer for Infoblox, a Santa Clara, California, technology firm that acquired Tacoma’s IID earlier this year.

“We drive up to Tahoe, which is about four hours away, and our house is freezing cold in the winter,” Gupta said. Enter the IoT thermostat, which his wife turns on with her smartphone two hours before they arrive. “When we get there, the house is nice and warm.”

But novelty and convenience can come at a cost. Device security is not keeping pace with innovation, the Department of Homeland Security wrote in a paper released last month.

“Because our nation is now dependent on properly functioning networks to drive so many life-sustaining activities, IoT security is now a matter of homeland security,” the agency wrote.

Devices, harnessed by hackers, were able to shut down the central heating and water systems last month at two apartment buildings in Finland. Last year, researchers found nine types of internet-connected baby monitors were vulnerable. They were able to view live video feeds, change camera settings and copy video clips stored online.

“When you’re dealing with things that stream data, whether it’s video or audio, to the internet, you have to look at your comfort level of who you are and what do you do,” said Deral Heiland, research lead at Rapid7, a company that in part seeks security vulnerabilities and reports them to product creators.

Jason Hong, an associate professor of computer science at Carnegie Mellon University in Pittsburgh, said beefing up security costs money, and companies want to maximize profits.

“A lot of people are rushing to market quickly,” Hong said. “It’s easy to see this product has a beautiful form factor and user interface. It’s hard to see if it’s got security built in.”


Users can take a few simple measures to minimize their risks, Hong and others said:

▪  Change the device or service’s default passwords. Manufacturers often ship products to consumers with what’s called a “factory default setting” with the same password. Changing the password makes it harder for hackers to access.

▪  Search online for the product name to see if any security flaws have been found.

▪  Unplug devices when they are not being used.

▪  Install all software updates for your device. Operating software can be a weak spot in device security.

▪  Consider why you need an internet-connected device in the first place.
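
The first tip above, changing factory-default passwords, is also the easiest to audit. Here is a hypothetical sketch of such a check; the default-password list and the device inventory are invented for illustration:

```python
# Sketch of the first tip above: scan a (hypothetical) device inventory
# for credentials still set to common factory defaults. The default list
# and the inventory are invented for illustration.
FACTORY_DEFAULTS = {"admin", "password", "12345", "default"}

inventory = [
    {"name": "front-door camera", "password": "admin"},
    {"name": "thermostat",        "password": "k9!vR2#x"},
    {"name": "baby monitor",      "password": "12345"},
]

def flag_default_credentials(devices):
    """Return the names of devices still using a factory default."""
    return [d["name"] for d in devices if d["password"] in FACTORY_DEFAULTS]

assert flag_default_credentials(inventory) == ["front-door camera", "baby monitor"]
```

Botnets that conscript IoT devices typically probe exactly such well-known default credentials, which is why this single change removes the most common attack path.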

Companies with decades of experience in other spaces are starting to enter the web-connected device market.

“Automobile companies are starting to realize they are also software companies,” Hong said. “Some of them don’t realize that — and that’s where the danger is.”

Tacoma partnership aims to thwart large cyberattacks

Infoblox and University of Washington Tacoma are researching ways to prevent the Internet of Things from interrupting services that affect the government, economy and day-to-day activities. Think traffic lights, power grids and the internet itself.

Using a type of artificial intelligence called machine learning, the partnership will help thwart IoT attacks.

“The IoT threat is not the future. It is here today,” said Ashish Gupta, chief marketing officer for Infoblox.

Devices made cheaply with weak encryption can allow hackers to take control and use them to shut down parts of the internet in what is called a distributed denial-of-service attack.

In short, the partnership’s algorithms are looking for the abnormal traffic.

“We’re looking for weird things, weird patterns,” said Anderson Nascimento, a UWT assistant professor. Based on that analysis, the program could eventually be able to isolate the compromised device and quarantine it.

“We see these abnormal behaviors in these devices and we shut them down in an automated manner,” Gupta said.
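
The "looking for weird patterns" idea can be sketched in miniature. This toy flags devices whose traffic deviates sharply from the fleet median (a robust outlier test); the device names and rates are invented, and real systems use far richer machine-learning features than a single rate:

```python
# Toy version of IoT traffic anomaly detection: flag devices whose
# request rate deviates sharply from the fleet's typical behavior,
# using a median-based outlier score. Device names and rates are
# invented; real systems use far richer ML features.
from statistics import median

def find_anomalies(rates, threshold=5.0):
    vals = list(rates.values())
    med = median(vals)
    mad = median(abs(v - med) for v in vals) or 1  # guard against zero spread
    return {dev for dev, r in rates.items() if abs(r - med) / mad > threshold}

# requests/minute per device; the camera is flooding traffic (DDoS-like)
rates = {"thermostat": 12, "fridge": 9, "doorbell": 11,
         "bulb": 10, "camera": 5_000}

quarantine = find_anomalies(rates)
assert quarantine == {"camera"}
```

A detector like this would feed the quarantine step Nascimento describes: once the compromised camera is flagged, its traffic can be isolated automatically.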

The New Microsoft and Its Partnership Strategy for Internet of Things (IoT) – 1redDrop (blog)

Microsoft’s new business mantra seems to be “If we can’t do it, let’s just find someone who can.” The Redmond tech giant announced a new partnership with global navigation and mapping company TomTom to integrate location-based services into its Azure cloud platform, to power IoT (Internet of Things) applications that need to be aware of their location.

Microsoft already has a long-time partner in HERE, which powers the company’s location data in Bing, Cortana, Windows and Office, while geographic information system (GIS) technology company Esri is already integrated with many Azure applications.

“Esri’s real-time GIS runs on Azure and can ingest any real-time, location-based data, including weather data, social media feeds, live sensor data and location services data from companies like HERE and TomTom.” – Microsoft

As the utility of connected devices expands along with the impending explosion of IoT deployments, geospatial data is becoming more important than ever. Microsoft’s cloud platform has all the necessary parts to process the collected data and will form the backbone of such applications by bringing its computing power into play. The gap it needed to fill is the on-the-ground technology that gathers that data, which is where TomTom, Esri and HERE will have a huge role to play.

“Internet of Things is a network of connected data-harvesting devices that can convert every measurable action into data and then collect, store, transmit, analyze and exploit that data to identify patterns and trends to ultimately create actionable intelligence – or, in some cases, cause actions to occur based on that intelligence.” – Shudeep Chandrasekhar

To support the rapid growth of IoT deployments around the world, Microsoft is looking to enable location-based services for customers through an open platform capable of handling large data sets and native integration to help developers deploy these systems and manage them more effectively. The company’s vision is a platform that provides enhanced customization that can be implemented for smart cities and large-scale IoT systems across industries ranging “from manufacturing to retail to automotive.”

Microsoft outlines a variety of scenarios such as connected automobiles that can deep-integrate weather, traffic and mapping data into the driver’s schedule, while providing for custom-tailored routing capabilities based on their needs. For example, if the driver needs to make an important business call on the way to work, the system should be able to use multiple factors to determine the shortest commute that also has good cell coverage from his or her carrier.

There are any number of use cases for such intelligent systems that can combine huge amounts of data and convert that into the “actionable intelligence” that I wrote about in my article on Intel’s IoT push in Seeking Alpha several months ago. This actionable intelligence is the first step in allowing these systems to make personalized recommendations based on actual user needs rather than mere predetermined cookie-cutter scenarios that are relatively easy to plan for and implement.

The real challenge is in building a platform that has the compute ability to not just envision possible scenarios, but “pro-act” rather than “react” to real-time, real-world eventualities. That’s real AI as far as IoT systems are concerned.

The application possibilities are, indeed, tremendous. And by partnering with specialists in areas where they do not have the expertise, Microsoft is being extremely proactive in bringing these technologies to reality.

Difference Between Linux And BSD | Open Source Operating Systems – Fossbytes

Short Bytes: Linux and BSD are two open source operating system families inspired by the 20th-century operating system Unix. Several things set the two apart, such as hardware support and development philosophy; Linux is also far more popular than BSD.

When you start to look beyond the Windows ecosystem, the very first thing you see is macOS. But chances are you won’t go for it, mostly because of the price tag. Moving further, you come across Linux flaunting its open source badge. Many people describe Linux as an operating system, which has been a topic of controversy for a long time; that is why some refer to a Linux operating system as GNU/Linux.

Soon, you start realizing how diverse the Linux ecosystem is, with numerous Linux distributions and their derivatives. You might almost believe that the Linux family alone represents the open source community. But there is a lesser-known family of operating systems, BSD (Berkeley Software Distribution), which also counts as one of the major names in the open source community.

Difference between Linux and BSD

The biggest difference between Linux and BSD is that Linux is just a kernel, whereas a BSD is a complete operating system (kernel included) derived from the Unix operating system. A Linux distribution is created by stacking other components on top of the Linux kernel: combine the kernel with GNU software and other components and you’ve got a Linux ‘operating system.’ In the case of BSD, the makers build the complete operating system.
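That distinction is even visible from userspace. As a small illustration (my own, not from the article), Python’s standard library reports the kernel name, from which the family can be inferred:

```python
import platform

def os_family(kernel=None):
    """Classify an OS family from the kernel name platform.system() reports."""
    kernel = kernel or platform.system()  # e.g. "Linux", "FreeBSD", "Darwin"
    if kernel == "Linux":
        # Linux is only the kernel; the userland comes from GNU and others.
        return "Linux distribution (kernel + separate userland)"
    if kernel.endswith("BSD") or kernel == "Darwin":
        # BSDs (and macOS's Darwin core) ship kernel and userland together.
        return "BSD-derived (complete OS from one source tree)"
    return "other"

print(os_family())
```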

Read More: What Is A Linux Distribution? How Are All These Linux Distros Different?

Both the Linux and BSD families have their own mascots. For Linux, it is Tux the penguin; there are various stories about how Tux became the Linux mascot and how he got his name.

BSD Daemon or Beastie, a cute-looking demon cartoon, is the face of the BSD family.

Choices in Linux and BSD

For Linux users, there is an almost uncountable number of distributions available. Most are derivatives of a few popular Linux distributions, including Debian, Gentoo, Red Hat and Slackware. In addition, there are many independent Linux distributions like Solus and Puppy Linux.

The original BSD operating system is now defunct, but its name lives on in the existing family of BSD derivatives. The current BSD ecosystem revolves around three primary operating systems – FreeBSD, OpenBSD and NetBSD – along with DragonFly BSD and other derivatives. Of these, FreeBSD is aimed at general users and accounts for around 80% of BSD installations.

And if you still think BSD is an obscure name: macOS (formerly Mac OS X), the operating system on Apple machines, is a closed-source descendant of the BSD family.

There are derivatives available for FreeBSD, NetBSD, etc., but their number falls far short of the number of Linux distributions.

Applications for Linux and BSD

Applications for Linux are delivered in the form of pre-compiled binary packages. Deb and RPM are the two main package formats, and they can be installed using package managers like APT, yum or pacman.

The story is different in the case of FreeBSD where Ports are used to install applications on the operating system. There are currently more than 25,000 ports available in the FreeBSD Ports Collection.

Unlike the packages in Linux, these FreeBSD Ports contain source code which needs to be compiled on the machine. This doesn’t make FreeBSD comfortable for average users. However, precompiled binary packages – installed using pkg – have been increasing in number.

BSD has a comparative scarcity of applications. Its developers have tried to address the situation by creating a Linux compatibility layer to run Linux applications on BSD.

One thing to note: before you write off FreeBSD as something from another world, you may be relieved to know that it also supports popular desktop environments like GNOME and KDE, as well as many other applications available for Linux.

UNIX Connection of Linux and BSD

It is a common notion that most of the operating systems in today’s world are in some sense related to Unix. Unix was a closed-source operating system – yes, you read that correctly – developed at Bell Labs (now Nokia Bell Labs) in assembly language. Later, major parts were rewritten in the C programming language, whose single-letter name is much talked about.

BSD (originally a closed-source OS) and its derivatives are direct descendants of Unix. Unlike their ancestor, FreeBSD, NetBSD, etc. are open source operating systems.

The Linux kernel, and the distributions based on it, live in a different lineage. Linux (when regarded as an OS) behaves similarly to Unix, which is why it is called a Unix-like operating system; it has no direct code ancestry from Unix.

The Linux kernel was created by Linus Torvalds, who remains a one-man gatekeeper controlling what goes into the kernel during its development.

Also Read: Difference Between Freeware and Open Source Software

Linux and BSD Hardware Support

Open source operating systems have long struggled with proper hardware support. Microsoft Windows and Apple macOS take the lead in this race: whether it’s the latest processor or a powerful graphics chip, the proprietary operating systems enjoy support well before Linux and BSD.

Within those limits – in comparison to Windows – the Linux-based open source operating systems have the upper hand, as they have started to receive real attention from various hardware vendors. BSD lags far behind and can’t expect the same treatment any time soon.

GPL and BSD License

Another main difference between Linux and BSD is the license under which they are released. Linux comes under the Free Software Foundation’s GPL (GNU General Public License). The operating systems based on BSD are licensed under the BSD License (also known as the FreeBSD License).

The GPL embodies Richard Stallman’s view that software should be made free, in the sense of freedom, by keeping it accessible to everyone. That’s why the GPL requires anyone who distributes GPL-licensed software, or a derivative of it, to also make the source code available.

The BSD License, on the other hand, doesn’t make it compulsory to disclose source code. It is up to the creator of a derivative work whether to make the code open source or not.

“If it ain’t broke, don’t fix it.”

The BSD camp lives by this maxim: an ultra-modern feature rarely appears in BSD until there is a genuine need for it. Some Linux distributions, by contrast, try to include the best and latest.

That is also why BSD operating systems are considered reliable and stable.

Winding Up

Among the general public, Linux is far more visible than FreeBSD. That’s because FreeBSD requires a more tech-savvy user and Linux has better hardware support. Another reason may be the larger community supporting Linux over BSD.

The BSDs are better known for their reliability and find their place on servers and embedded systems. BSD also has the capability to run binaries built for Linux, but the reverse is not true.

It’s hard to single one of the two out as better, because each has its own set of pros and cons.

If you have something to add, tell us in the comments below.

Also Watch: Which Linux Distribution Is Best For Me? 

Internet of Things Hackers Target Anti-Spam Service—And Fail To Take It Down – Motherboard

A hacker who last month claimed to have created a new massive army of hacked Internet of Things devices is attacking the anti-spam organization Spamhaus with a distributed denial of service (DDoS) attack. But, at least for now, he’s failing to take it down.

The hacker, who goes by the moniker BestBuy, told Motherboard on Tuesday that he wanted to send a message to Spamhaus, accusing it of being an organization of “blackmailers.”

Read more: The Looming Disaster of the Internet of (Hackable) Things

“Spamhaus are fucking us over everywhere,” he said in an online chat on Twitter. “They put their nose where it does not belong […] We are fucking pissed.”

The attack is a reminder that despite it being almost two months since one of the worst DDoS attacks ever, carried out with the Internet of Things botnet Mirai, cybercriminals are still using it to carry out attacks, seemingly unfazed.

The hacker said that at the beginning at least the attack wouldn’t be too strong, “just a ‘ping,’ something like ‘hi, we see what you are trying, stop.’” BestBuy didn’t elaborate too much on why he was so angry at Spamhaus, only saying Spamhaus was shutting down some of his server’s IPs.

BestBuy claimed that he was using only 200,000 hacked Internet of Things devices for the attack, but he would increase the count to 800,000 if Spamhaus didn’t go down. The hacker was using the bots to target Spamhaus’ DNS servers.

“DNS attack is small but they will get 5~TBps soon or later if they don’t crawl back into their little hole,” he said.

A spokesperson for Spamhaus, which was the victim of one of the largest DDoS attacks ever in 2013, said on Wednesday that the attack appears to be ongoing, “but it’s difficult to tell because it’s not really doing anything to us…our services are working fine.”

“We have just a handful of reports that some users have trouble reaching our website/DNS from some networks,” he added.

An independent security researcher who goes by the name 2sec4u, and who has been tracking Mirai botnets and attacks for weeks, confirmed that BestBuy’s botnet was attacking Spamhaus on Tuesday.

“They’re being DDoSed for sure,” 2sec4u told Motherboard in a Twitter message.

MalwareTech, who’s also been tracking Mirai along with 2sec4u, said that it’s unclear how strong the attack is—only Spamhaus can reveal that—but that BestBuy’s botnet is made up of one million bots, though the hacker “never utilizes all bots.”

The researchers said that Spamhaus appears to be withstanding the barrage, but it had to switch its DNS from a mix of providers to Amazon’s cloud service alone.

Another day, another massive attack carried out thanks to the Internet of (Hackable) Things.

Google Updates Internet of Things Platform – RTInsights – RTInsights (press release) (blog)

Updates are designed to help developers build smart devices using Android APIs and Google services and connect them to the cloud.

Google announced today that it has made two major updates to its IoT developer platform, both designed to make creating IoT products easier.

The first is a Developer Preview of Android Things, a full-featured tool for building IoT products with the Android OS baked in, the company stated in its announcement. A developer can now build a smart device using Android APIs and Google services, and that device will be highly secure and get direct updates from Google.

Google developed Android Things based on the feedback it received about Project Brillo and considers it an improvement.

“Brillo was to create a much more lightweight version of Android for developers,” a Google rep told Mashable. “When we granted early access to external partners, we realized that most people developing for smart devices didn’t really have a need for a lightweight version of Android … they wanted the full Android developer experience and signature features. Especially for connected devices throughout the world, security should be a major priority in building for this community.”

The preview incorporates Android Studio, the Android SDK, Google Play Services and Google Cloud. The company said it plans to provide more updates in the coming months, focusing on security, built-in Weave connectivity and more. Turnkey hardware available for use with Android Things includes the Raspberry Pi 3, Intel Edison and NXP Pico.

Google said it is also updating the Weave platform. The updates are designed to make it easier for all types of IoT devices to connect to the cloud and interact with tools like Google Assistant. Philips Hue smart lights and Samsung SmartThings already use the platform, and the company said other major brands like Belkin, Honeywell and First Alert are currently working on implementing it.

Weave’s SDK supports smart building devices like lights, thermostats, smart plugs and switches, and Google says it plans to add support for more types of IoT devices, along with a mobile app API for Android and iOS. The company is also working on merging Weave and Nest Weave.

Interested developers can visit the Android Things, Weave, and Google Cloud Platform sites for documentation and code samples.


Smart cities

Amazon's New Feature Aims to Lure Big Companies to Its Cloud – Fortune

Amazon’s quest to be a one-stop-shop for corporate customers continues.

The retail giant’s cloud computing arm, Amazon Web Services, said Monday it has built a new feature called AWS Managed Services. Instead of merely offering storage and computing resources on demand, the new feature lets customers offload to AWS the mundane legwork required to operate and manage corporate software infrastructure.

Prior to this move, a company’s IT workers would need to do routine maintenance on software infrastructure to ensure operations were running smoothly. Some of this work might involve tasks like patching and updating various software services, ensuring that any changes made to various corporate software don’t adversely affect other services, setting up the hardware that developers can build software on top of, and monitoring all the infrastructure to guard against bugs and security breaches.

Get Data Sheet, Fortune’s technology newsletter.

All that IT operations work can be a chore, even when businesses buy their cloud computing resources on demand. Companies may get speedy access to virtual servers on the fly, but that doesn’t mean they can forgo setting up accounts, updating software, and other operational tasks.

Amazon’s new service aims to remove, through various automation technologies and an undisclosed number of behind-the-scenes Amazon employees working on corporate accounts, a lot of the IT operations gruntwork that staff traditionally handle, according to an Amazon blog post.

The company said the service will likely be used by Fortune 1000 and Global 2000 companies, and that it’s “designed to accelerate cloud adoption.” The retail giant, like other companies including Microsoft and Google, stands to make lots of money if businesses stop buying their own data center hardware and instead run their corporate infrastructure on its cloud computing service.

Still, Amazon’s new managed service puts the company in direct competition with other third-party managed services companies that are also its partners, as technology analyst Kurt Marko noted on Twitter.

Companies like 2nd Watch, Datapipe, and Cloudreach sell similar IT management services specifically to help companies more easily manage their cloud computing infrastructure.

Cloudreach head of commercial strategy Andre Azevedo acknowledged in a blog post the perception that Amazon’s new service could impact cloud-centric managed service providers, but said that as long as these providers can differentiate themselves from what Amazon is selling, they should be fine. Amazon’s new service only deals with managing operations across infrastructure services like storage and computing. It does not include the management of databases or so-called middleware, essentially the software glue that connects various corporate software together.

Azevedo also wrote that he believes the new service is not designed for small companies or startups like home-sharing service Airbnb, which built its business on AWS.

“This is very much geared towards large enterprises who intend to move significant amounts of infrastructure to the cloud,” wrote Azevedo.

With the new service, Amazon is pushing hard to convince corporations to abandon their existing data centers and move everything onto its infrastructure. Rumors of the new Managed Services offering emerged in April, prompting Fortune‘s Barb Darrow to compare Amazon’s move to compete with its partners to what companies like Microsoft and IBM have done in the past.

In December, Amazon unveiled Snowmobile, essentially a giant U-Haul-style truck that can transport enormous quantities of corporate data into an AWS data center. Instead of having to wait long periods while transferring data across the Internet, companies can ship all that data to Amazon directly by truck.

Amazon said Managed Services is available today and its price is dependent on how many AWS computing resources a business consumes. Customers can use the new service to manage their AWS resources in various Amazon data center regions like Northern Virginia, Oregon, Ireland, and Australia, with more coming down the road.

Sigfox and Thinxtra to Launch Internet of Things Network in Hong Kong – Yahoo Finance


Sigfox, the world’s leading provider of connectivity for the Internet of Things (IoT), and Thinxtra, the Sigfox operator in Australia and New Zealand, today announced an agreement to roll out Sigfox’s IoT network in Hong Kong SAR (Special Administrative Region) in 2017.

The Hong Kong SAR Government has stressed the importance of IoT in its agenda. The 2014 Digital 21 strategy (the blueprint for information and communications technology (ICT) development) recognized IoT as one of the latest technologies Hong Kong should adopt and champion, and the 2015 Policy Address introduced a new initiative, “Energizing Kowloon East,” aiming to carry out a pilot study in that district to examine the feasibility of developing a smart city.

The IoT space opens up new and exciting opportunities by connecting the physical world to the Internet. In just five years, Sigfox has built a global wireless network that provides a simple, efficient connectivity solution, enabling devices to connect to the cloud at ultra-low cost and using minimal energy. Sigfox’s network is now present in 28 countries and on track to be in 60 by 2018, which will represent over 80% of the world’s gross domestic product.

“Hong Kong as an innovation hub and technology center, and as the gateway to China, will benefit greatly from the most advanced and mature global IoT network, and developing the local ecosystem will benefit all the other countries in the region. We look forward to extending our partnership with Thinxtra, our operator in Australia and New Zealand,” says Rodolphe Baronnet-Frugès, Sigfox executive vice president networks and operators.

“We are delighted to expand our network to the Hong Kong market together with Thinxtra, and we are confident that together we will strengthen the Hong Kong government’s IoT,” said Roswell Wolff, Sigfox’s president, Asia Pacific.

Thinxtra was the first Sigfox operator to extend the network into Asia Pacific and has achieved rapid coverage, having rolled out the network across 65% of the Australian and 80% of the New Zealand population in just eight months. Earlier this year, Thinxtra announced an engagement with Silicon Controls for 1 million connections, as well as a memorandum of understanding (MoU) between the State Government of South Australia and Sigfox for a full rollout of the Sigfox network across South Australia.

Murray Hankinson, Thinxtra Managing Director Asia said, “Thinxtra is the pure play LPWAN IoT service provider in Australia and New Zealand and will soon be in Hong Kong, supporting the city’s vision for IoT and Smart City. We have proven that we have a winning team that lives and breathes IoT, a world leading secure purpose built technology in Sigfox and that we achieve great things quickly and efficiently working with local partners and businesses. We are planning to replicate this winning formula in Hong Kong. This experience and forward thinking will serve us well in bringing the most mature IoT network to Hong Kong – territory-wide. We are excited by the opportunity to harness local talent and the technological community to expand the Sigfox ecosystem in support of Hong Kong’s local & global applications. We are already open for business, having recently set up our Solutions business in the HKSTP to foster and promote the IoT device design, solution, and manufacturing industries.”


Cortana's coming to robots and smart devices via Windows 10 for the Internet of Things – PCWorld

In 2017, your new fridge might be able to tell you corny jokes and serenade you, but still lack a witty answer to “Are you Skynet?” Microsoft announced during the WinHEC conference in Shenzhen, China, that Cortana would arrive on Windows IoT Core as part of the Windows 10 Creators Update. The IoT version of Windows is designed for smart devices such as robots, maker projects, thermostats, toasters, doorbells, and picture frames.

Windows IoT Core can sometimes lag a little behind Windows 10 PC releases, so it’s not clear when the IoT version will roll out. During the Windows 10 November Update in 2015, the IoT build lagged by about a month, though the later Anniversary Update arrived at around the same time on both PCs and IoT devices.


The impact on you at home: If I were in a betting mood, I’d expect the new Windows IoT Core to inspire multiple Cortana-powered smart speakers from third parties. Microsoft doesn’t appear to have any interest in making its own smart speaker; instead, the company is sticking to its standard PC-is-the-solution-to-everything approach with Home Hub. That makes a certain amount of sense, however. As we’ve noted before, Microsoft is backing away from hardware under CEO Satya Nadella, save for the flagship Surface line. An army (or at least a platoon) of third-party Cortana smart speakers would likely benefit Microsoft more than a single “Surface Echo” built to take on Amazon and Google.

Bolts are ready, just add the nuts

To that end, Microsoft is bringing the features to Windows IoT Core that you’d need for a Cortana-powered speaker. In addition to Cortana itself, Microsoft is also adding far-field speech communication and wake-on-voice functionality to Windows 10 IoT and the PC version of the Creators Update.

Far-field speech will enable Cortana to recognize voice commands from up to 13 feet away, and wake-on-voice allows users to activate a device with the “Hey Cortana” command.

Microsoft hopes to see the first Cortana-powered IoT devices roll out in late 2017 around the back-to-school or holiday shopping seasons. If Microsoft and its device partners hit that target we could see Cortana-powered devices show up as early as Computex 2017 next May.

Nearly Two-Thirds of Americans Own at Least One Internet of Things Connected Device, with 65% Reporting They Are … – Business Wire (press release)

NEW YORK–IAB (Interactive Advertising Bureau) today released “The Internet of Things,” a study which shows that nearly two-thirds (62%) of American consumers own at least one Internet of Things connected device (connected car, connected/smart TV, fitness tracker, home control system or appliance, internet-enabled voice command, smart glasses, smart watch, VR headset, or wearable)—and 65 percent of them say that they are willing to receive ads on IoT screens. Sixty-two percent of them already report having seen an ad on an Internet of Things connected gadget. The study also reveals that IoT owners are likely to be parents ages 18-34, with college educations and household incomes above the national $50K average.

The report, conducted by MARU/VCR&C and surveying over 1,200 U.S. adults, shows that an overwhelming majority (97%) have heard of these types of connected devices and 65 percent of those who have yet to buy are interested in purchasing one. More than half (55%) of U.S. adults—whether IoT device owners or not—say that they would be willing to see ads on these devices in exchange for an offering from a marketer, such as a coupon (44%), extra features (30%), or access to exclusive games (19%).

The most popularly owned IoT devices are connected/smart TVs and streaming devices (47%), followed by wearable health trackers (24%) and internet-enabled home control devices (17%).

For consumers considering an IoT purchase, connected/smart TVs and streaming devices were the top choice (39%), followed by:

  • Connected cars (37%)
  • Wearable health trackers (32%)
  • Internet-enabled home control devices/systems (31%)
  • Internet-enabled voice command systems (31%)
  • Internet-enabled appliances (30%)
  • VR headsets (30%)
  • Smart watches (27%)
  • Smart glasses (21%)

“Vigorous growth in familiarity and IoT usage is fueling interest among consumers—and brands need to pay attention,” said Patrick Dolan, Executive Vice President and Chief Operating Officer, IAB. “To access the coveted IoT audience that is already open to receiving ads on their devices, advertisers need to consider ‘added incentives’ for their messages. As adoption continues and marketers learn to weave the Internet of Things into their strategies, tomorrow’s prospects for IoT as a marketing platform will be very bright.”


The research was conducted among MARU VCR&C’s Springboard America online panel (~250,000 U.S. members) using an online survey. A representative sample of 1,200 U.S. adults ages 18-74 participated between August 3-8, 2016.

About IAB

The Interactive Advertising Bureau (IAB) empowers the media and marketing industries to thrive in the digital economy. It is comprised of more than 650 leading media and technology companies that are responsible for selling, delivering, and optimizing digital advertising or marketing campaigns. Together, they account for 86 percent of online advertising in the United States. Working with its member companies, the IAB develops technical standards and best practices and fields critical research on interactive advertising, while also educating brands, agencies, and the wider business community on the importance of digital marketing. The organization is committed to professional development and elevating the knowledge, skills, expertise, and diversity of the workforce across the industry. Through the work of its public policy office in Washington, D.C., the IAB advocates for its members and promotes the value of the interactive advertising industry to legislators and policymakers. Founded in 1996, the IAB is headquartered in New York City and has a West Coast office in San Francisco.

Modernizing HIPAA: Cloud Computing and Mobile Devices – Lexology (registration)

The Office of Civil Rights (OCR) of the US Department of Health and Human Services (HHS) recently released guidance on cloud computing that allows entities covered by the Health Insurance Portability and Accountability Act (HIPAA) to take advantage of …

CrateDB: The IoT and machine data-focused database – Network World

There’s been a whole lot of conversation in the database world in recent years about the best type of database for modern applications. Over the past couple of years this has mainly centered on the SQL versus NoSQL wars.

On the one hand are the traditional SQL-based databases, which all follow a traditional row and column format. These are the databases that have existed since pretty much year dot and have proved themselves to be good all-around tools.

The advent of social media, and the need for database approaches that work well with the unstructured data these properties generate, has led to the rise of NoSQL databases. These databases don’t follow, or at least don’t only follow, the standard tabular approach to data, so storage and retrieval aren’t bound to the rigid row-and-column format.
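The contrast can be shown concretely with nothing but Python’s standard library — sqlite3 for the fixed row-and-column model, and a plain key-to-JSON-document store standing in for the schemaless NoSQL style (an illustrative toy, not any particular product):

```python
import sqlite3
import json

# SQL model: schema declared up front, every row has the same columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, city TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Ana', 'Lisbon')")
row = db.execute("SELECT name, city FROM users WHERE id = 1").fetchone()
print(row)  # ('Ana', 'Lisbon')

# Document model: each record is self-describing; shapes can differ
# from record to record without any schema migration.
store = {}
store["user:1"] = json.dumps({"name": "Ana", "city": "Lisbon"})
store["user:2"] = json.dumps({"name": "Ben", "follows": ["user:1"]})
print(json.loads(store["user:2"])["follows"])  # ['user:1']
```

The second record in the document store carries a field the first one lacks — exactly the flexibility that made NoSQL attractive for unstructured social-media data.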

However, in recent times, we have seen the SQL versus NoSQL discussion broaden somewhat, and people are looking to the actual usage patterns for the database and thinking about particular requirements.

Cortana gets IoT integration, support for third-party skills – PCWorld

Microsoft’s Cortana virtual assistant is getting a lot smarter. On Tuesday, the company announced a set of developer tools aimed at bringing it into the internet of things, and adding support for developers to build new functionality for it.

The makers of IoT devices like speakers and cars will be able to use a Microsoft software development kit to integrate Cortana into their products. In addition, developers will be able to build custom integrations that add capabilities to Microsoft’s virtual assistant.

Microsoft is also launching a new service designed to help users simplify the process of scheduling meetings. Cortana will help find openings on a user’s calendar and work with meeting participants to find a time that works for everyone.

The new features are part of the company’s continuing investment in its virtual assistant. Microsoft is competing heavily against Google, Amazon, Apple and other companies to become the company that provides users with an intelligent assistant.

Amazon’s popular Alexa virtual assistant, which is best known as part of the Echo speaker, can also be integrated into IoT devices and built to work with external services. Google recently announced that its assistant will be able to integrate with other companies’ data as well.

Microsoft is working with Harman Kardon to create a “premium home speaker” with Cortana built in, coming to the market in 2017. That speaker should help Microsoft better compete with Google Home and Amazon Echo.

Microsoft is also working on integrating Cortana into connected cars, said Marcus Ash, partner group program manager for Cortana. More announcements about Cortana integration are forthcoming, but Ash said those will be driven by Microsoft’s hardware partners.

Capital One will be one of Microsoft’s launch partners for the Cortana Skills Kit, which will let companies integrate their services with the virtual assistant. The financial company already offers a skill for Alexa, and the Cortana functionality will let users manage their money by doing things like checking their balance, paying bills, and seeing how much money they spent at particular merchants.

Google launches Android Things, a new OS for IoT gadgets – The Next Web

Google has launched Android Things, an IoT platform that lets you build connected devices while leveraging Android APIs and the company’s cloud-based services for delivering updates and enabling voice commands.

The company says it’s combining Brillo, its previous Android-based IoT OS, with tools like Android Studio, the Android SDK, Google Play Services, and Google Cloud Platform, to make it easier for developers to build smart devices.

Google is also working to add support for Weave, its IoT communications platform that helps devices connect to Google services for setup and to talk to other gadgets, as well as Google Assistant.

Philips Hue and Samsung SmartThings already use Weave; Google says that other major manufacturers like Belkin WeMo, LiFX, Honeywell, Wink, TP-Link, and First Alert will also back the platform. Plus, Nest’s Nest Weave will also be merged into Weave so all the connected devices from these brands will be able to work with each other.

In addition, you can begin building products using supported hardware development kits like Intel Edison, NXP Pico, and Raspberry Pi 3.

With that, Google hopes to lead the charge in the burgeoning IoT platform space. Although there are several alternatives available to developers, the company’s offering, with access to its cloud services, might seem more compelling than the rest.

Android Things is currently in Developer Preview; you can get started here. The company will also work with Brillo users to help migrate their projects to the new platform.

Announcing updates to Google’s Internet of Things platform: Android Things and Weave, on the Android Developers Blog

Global IoT Real Time Operating Systems (RTOS) Market Analysis and Forecasts, 2022: RTOS is a Critical Component … – PR Newswire (press release)

DUBLIN, Dec 14, 2016 /PRNewswire/ —

Research and Markets has announced the addition of the “Real Time Operating Systems (RTOS) for IoT: Market Analysis and Forecasts 2017 – 2022” report to their offering.

A real-time operating system (RTOS) is an OS that manages hardware resources, hosts applications, and processes data on a real-time basis. An RTOS determines real-time task processing time, interrupt latency, and the reliability of both hardware and applications, especially for low-powered, memory-constrained devices and networks.
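Real RTOS kernels are typically written in C; purely as an illustrative sketch, the fixed-priority dispatch at the heart of most RTOS schedulers can be modeled like this (task names, priorities, and run times are made up):

```python
import heapq

# Toy fixed-priority scheduler: lower number = higher priority, as in many
# RTOS APIs. Each ready task is (priority, name, run_time_ms).
ready = []
for prio, name, cost in [(2, "logging", 5), (0, "motor-isr", 1), (1, "sensor-poll", 2)]:
    heapq.heappush(ready, (prio, name, cost))

order, clock = [], 0
while ready:
    prio, name, cost = heapq.heappop(ready)  # always dispatch highest priority
    clock += cost
    order.append((name, clock))              # record each task's finish time

print(order)  # the high-priority motor-isr completes first, logging last
```

The point the definition makes about interrupt latency follows from this policy: because the highest-priority ready task always runs next, the worst-case delay before an urgent task executes is bounded and predictable, which general-purpose schedulers do not guarantee.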

This research provides an in-depth assessment of the RTOS-embedded IoT system market, including the following:

For more information about this report visit

Media Contact:

Research and Markets
Laura Wood, Senior Manager

SOURCE Research and Markets

46bn IoT devices by 2021 as sensor market surges, research says – NFC World

The number of connected Internet of Things (IoT) devices and sensors will reach more than 46bn by the year 2021, a 200% increase from 2016, with the global sensors market generating revenues of US$162.36bn on its own by 2019, two research reports from Juniper Research and Frost and Sullivan reveal.

The rise in the number of IoT devices will be driven in large part by a reduction in the unit costs of hardware, Juniper says, with industrial and public services posting the highest growth over the forecast period, averaging more than 24% annually.
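As a quick sanity check, the two Juniper figures are consistent with each other: a 200% increase over 2016 means a tripling, and a tripling spread over the five years to 2021 works out to roughly 24.6% per year, matching the "more than 24% annually" claim:

```python
# Juniper's figures: 46bn devices in 2021, a 200% increase (i.e. a tripling)
# over 2016, across a 5-year span.
base_2016 = 46e9 / 3                         # implied 2016 base, ~15.3bn
cagr = (46e9 / base_2016) ** (1 / 5) - 1     # compound annual growth rate
print(f"{base_2016 / 1e9:.1f}bn in 2016, CAGR {cagr:.1%}")  # ~24.6% per year
```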

“The platform landscape is flourishing”, says Steffen Sorrell, the author of Juniper’s latest research The Internet of Things: Consumer, Industrial & Public Services 2016-2021. “However, analytics and database systems are, for the most part, not architected to handle the Big Data 2.0 era that the IoT brings.”

‘Woeful’ security

Juniper says disruption is needed in areas such as spatio-temporal analytics and intelligent systems able to run on less powerful machines.

However, it warned the “security threat landscape is widening”. While enterprise and industry are investing heavily in IoT security, the consumer market landscape is “woeful”, Juniper adds.

Meanwhile, Frost and Sullivan’s Global Sensor Outlook 2016 report says industrial control, smart cities and e-health will be the top contributors to the US$162.36bn revenue in the sensors market by 2019.

4 Billion IoT Devices Will Rely on LPWAN Technologies by 2025, Ecosystem Creation Matters – ABI Research (press release) (subscription) (blog)

London, United Kingdom – 14 Dec 2016


With four billion IoT devices expected to rely on Low Power Wide Area Networks (LPWANs) by 2025, ABI Research predicts that this technology will be the fastest growing connectivity segment in the market through 2025. The rise of LPWANs will translate into one billion chipset shipments with the technology generating a total value of more than $2 billion in 2025.

“The success, or otherwise, of different LPWAN technologies at stake will much depend on the market they are targeting case by case,” says Samuel McLaughlin, Research Analyst at ABI Research. “Regardless of the targeted use case, LPWAN technology suppliers should aim to create solid ecosystems around their technologies by either partnering with service platform providers or building one of their own. Otherwise, they will face serious hurdles in this fast-moving and highly competitive market.”

LPWAN can be split into two main categories: technologies that operate in unlicensed spectrum and those operating in licensed spectrum using 3GPP standards. Although unlicensed technologies—whether proprietary technologies like SIGFOX or those based on open frameworks like LoRa and Weightless—are gaining considerable momentum within the IoT market, they will be increasingly challenged by the emerging technologies based on 3GPP standards, notably NB-IoT.

“LPWAN technologies operating under unlicensed spectrum have the early market advantage and provide the quickest time to deployment, and the lowest infrastructure and operating costs for many IoT applications,” continues McLaughlin. “However, emerging 3GPP LPWAN technologies like eMTC and NB-IoT are promising similar performance and have many more advantages. These include strong support from the telecommunications ecosystem, the ability to operate ubiquitously across the cellular infrastructure already in place, and most importantly, the scalability for service providers to easily and quickly add new services to their portfolios using the same infrastructure.”

While some technologies, mainly those operating in unlicensed spectrum, will continue to perform well in specific segments, notably utility and energy management, as well as in retail applications, other technologies will better suit service providers who want to address many segments using the same infrastructure. Smart cities, smart homes, smart buildings, and industrial IoT applications are prime examples of such segments. Operators including Orange and SK Telecom are deploying various technologies operating in both licensed and unlicensed spectrums with the ultimate goal being to build service platforms that are agnostic to the access technology used. Their aim is to play each LPWAN technology to its strengths, depending on the market segment targeted.

ABI Research finds the utility and energy management market will hold the largest share of the LPWAN market through 2025 due to the fact that the application requirements of smart electricity, water, and gas meters match fundamental characteristics of LPWAN technologies, such as long battery life, wide coverage area, and higher link budget. Moving forward, the market will expand to include best-fit use cases for all LPWAN technologies, with smart street lighting and smart parking applications also forecast to see significant shipments.
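The "link budget" mentioned above is simply the total path loss a radio link can tolerate: transmit power plus antenna gains, measured against the receiver's sensitivity floor. The figures below are illustrative assumptions in the right ballpark for sub-GHz LPWAN radios, not values from the ABI report:

```python
# Illustrative LPWAN-style link budget (all values assumed; dB / dBm).
tx_power_dbm = 14        # typical sub-GHz LPWAN transmit power
tx_ant_gain_db = 0       # small integrated antenna on the end device
rx_ant_gain_db = 3       # gateway antenna
sensitivity_dbm = -137   # deep-coverage receiver sensitivity

# Maximum tolerable path loss, i.e. the link budget:
max_path_loss_db = tx_power_dbm + tx_ant_gain_db + rx_ant_gain_db - sensitivity_dbm
print(max_path_loss_db, "dB")  # 154 dB
```

A higher budget means the signal can survive more attenuation, which is why meters in basements and pits, as cited for the utility segment, favor LPWAN over short-range radios.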

These findings are from ABI Research’s Market Opportunities for Low Power and Cellular Wireless ICs for the IoT report. The study provides a comprehensive technology breakdown across nine competing solutions targeting more than 20 use cases.

Global Satellite Internet of Things (IoT) Market: Forecast to 2022 – Dynamic Vertical Markets Such as Oil and Gas … – Business Wire (press release)

DUBLIN – Research and Markets has announced the addition of the “Global Satellite Internet of Things (IoT) Market: Forecast to 2022” report to their offering.

While only accounting for 25.5 million units globally in 2015, the satellite IoT market is expected to grow at a faster pace than any other satellite market. With a compound annual growth rate (CAGR) of 19.9%, this market will provide new growth and opportunities for mobile satellite service providers that have struggled with the stagnant development of the satellite phone market.
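Assuming the 19.9% CAGR applies to the 25.5-million-unit 2015 base of active units (the report does not state the end-point figure here), the implied 2022 total is straightforward to project:

```python
# Project active satellite IoT units from the report's stated base year and
# CAGR: 25.5 million units in 2015, growing at 19.9% per year through 2022.
units_2015 = 25.5e6
cagr = 0.199
units_2022 = units_2015 * (1 + cagr) ** (2022 - 2015)
print(f"~{units_2022 / 1e6:.0f}M active units by 2022")  # roughly 91M
```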

This research provides a base year market size by active units, future growth estimation through 2022, and forecast breakouts for the 6 most significant market verticals within the satellite IoT market. For the purposes of this study, only S- and L-band satellite technologies are covered as they are the primary frequency bands used for pure-IoT applications.

Key Questions this Study will Answer:

  • How much revenue did the satellite IoT market generate in 2015?
  • How will the satellite IoT market grow over the forecast period?
  • What are the drivers and restraints impacting this market and to what degree?
  • What are the technology trends impacting the satellite IoT market?
  • How are the individual vertical markets growing and what are the key trends?
  • Who are the market participants and what does the competitive landscape look like?

Key Topics Covered:

1. Executive Summary

2. Market Overview

3. Drivers and Restraints-Total Satellite IoT Market

4. Forecasts and Trends-Total Satellite IoT Market

5. Market Share and Competitive Analysis-Total Satellite IoT Market

6. IoT Growth Opportunities

7. The Last Word

8. Appendix

For more information about this report visit

IoT And The Bump From Trump Spend –

Cassia Networks Launches Groundbreaking IoT Enterprise Solution – Business Wire (press release)

SAN JOSE, Calif. – Cassia Networks, the company behind the world’s first Bluetooth router, announced today a new suite of IoT products for enterprise applications. This product suite, which includes enterprise-level Bluetooth routers, an IoT access controller and a developer SDK, will solve many of the challenges of enterprise IoT adoption and development.

“The ability of the enterprise to easily use IoT environments in business operations is imperative to the success of the IoT market as a whole,” said Cassia Networks founder and CEO, Felix Zhao. “Despite the headway made on the consumer side of the market, enterprises have been unable to truly reap the benefits of a connected enterprise without great implementation costs. By enabling scalable communication and control over IoT environments at an affordable price, Cassia Networks stands to change this paradigm.”

Until now, lack of standardization and interoperability issues across wireless technology have been a major obstacle to IoT market growth. By improving the functionality of Bluetooth so that it can work across greater distances and a wide variety of products, Cassia Networks is backing Bluetooth as the ubiquitous wireless technology. In doing so, the company is solving two of the most fundamental barriers to IoT market entry – the cost and difficulty of deploying large-scale IoT environments that work.

With the Cassia IoT Access Controller (AC), businesses will now have unprecedented access, control and security over their IoT environments. The Cassia IoT AC solution enables seamless deployment and management of hundreds of Bluetooth routers connected to thousands of devices in an enterprise environment from one centralized interface. Other advanced features include: security policy management, Bluetooth locationing, seamless Bluetooth roaming and dynamic load balancing.
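Cassia has not published its API details here, so purely as a hypothetical sketch, the architecture described (one centralized interface fronting hundreds of routers, each connecting many Bluetooth devices) might be modeled like this; all names are illustrative, not Cassia's actual SDK:

```python
# Hypothetical model of a centralized IoT access controller: one controller
# tracks many routers, each of which connects many BLE end devices.
class AccessController:
    def __init__(self):
        self.routers = {}                       # router_id -> set of device ids

    def register_router(self, router_id):
        self.routers.setdefault(router_id, set())

    def connect_device(self, router_id, device_id):
        self.routers[router_id].add(device_id)  # router must be registered

    def device_count(self):
        # One "pane of glass" over every router's connected devices.
        return sum(len(devs) for devs in self.routers.values())

ac = AccessController()
for r in range(3):                              # three routers...
    ac.register_router(f"router-{r}")
    for d in range(2):                          # ...with two devices each
        ac.connect_device(f"router-{r}", f"ble-{r}-{d}")
print(ac.device_count())  # 6
```

The design point is that per-router state lives behind a single aggregation layer, which is what makes fleet-wide features such as roaming, load balancing, and policy management possible from one interface.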

Even once Bluetooth emerges as the ubiquitous wireless technology, device makers and customers will still face interoperability issues between products and software profiles. With the Cassia SDK, developers and device manufacturers can incorporate the Cassia SDK into their native app or server software for seamless integration with the Cassia routers and AC. This simple deployment process allows developers to connect any Bluetooth low energy product to the Cassia enterprise product suite without having to change the software and hardware of the end devices.

The Cassia enterprise product suite includes three Bluetooth routers – the S1000, S1100 and X1000. With a compact, cost-effective design, the S1000 and S1100 (Power Over Ethernet version) routers are optimized for indoor applications and can be mounted on a wall or ceiling, or simply placed on a desk or counter. The X1000 can be used in both indoor and outdoor environments and offers enhanced functionality. All three routers can act as an internet gateway working with the Cassia IoT AC so that users have remote access and control of their end devices.

Key benefits of the Cassia enterprise product suite include:

  • Seamless Bluetooth coverage for cost effective IoT deployments
  • Easy deployment and management of hundreds of routers and thousands of devices through the centralized AC interface
  • Bluetooth locationing for precise tracking of people and assets
  • Enhanced end-to-end security
  • Lower barrier of entry and improved ROI for enterprise IoT deployments

“From logistics to the factory floor, from sports arenas to hospitals, the IoT is on the cusp of radically changing how many industries operate,” added Zhao. “Together, our groundbreaking solution will open the door for enterprise IoT applications and environments that have not been realized before. We’re excited to bring this new frontier to the enterprise.”

To learn more about the company’s new enterprise product suite, visit:

About Cassia Networks

Cassia Networks builds next generation IoT solutions that are transforming how businesses and consumers experience IoT environments. By extending the range and functionality of Bluetooth, our solutions solve the key challenges of wireless connectivity and unlock the power of the IoT for all. Led by serial entrepreneur and wireless technology veteran Felix Zhao, Cassia is committed to building groundbreaking IoT products that enable IoT environments that work.

Learn more at: or follow Cassia on Facebook and Twitter.
