
Feelix Appstore Privacy Policy

Privacy Policy

This policy applies to all information collected or submitted in the Feelix App on iPhone and on any other devices and platforms.

Information we collect

We collect the information you save in the Feelix App, but that data never leaves your phone. In effect, the data you can see on your phone is the data being collected, and it goes no further.

Ads and analytics

Feelix doesn't track your usage, nor does it show you banner ads.

Information usage

We do not have access to your data.

We do not share personal information with outside parties (because we don’t have access to it).

Accessing, changing, or deleting information

All data is held locally within the Feelix App. If you wish to delete this data, clear the app's data in your device's storage settings. No data is transmitted externally.

Third-party links and content

Third-party links and content are not part of the Feelix App.

Information for European Union Customers

Feelix stores all of its data locally. No data is transmitted outside of the phone itself.

Your Consent

By using our site or apps, you consent to our privacy policy.

Contacting Us

If you have questions regarding this privacy policy, you may email feedback@pressedontech.com.


Hybrid IT vs Hybrid Cloud – a changing landscape

I recently had the opportunity to speak at a CxO peer-to-peer networking event run by Gartner.  I was on stage with a leading finance-sector CTO, who described how he is deploying new systems into Azure and enjoying the flexibility that brings. However, he has legacy (he corrected himself to "traditional") workloads that, due to the nature of the applications, will always reside within the data centre.  He also said, without prompting, that if you think the cloud is cheaper then you're in for a surprise.

That story resonated with anecdotes from other customers.  The appeal of the hyperscale public cloud is its flexibility and speed to market.  However, as with most things in life, anything you rent is always going to end up being more expensive than something you buy.  In addition, at the inception of the public cloud paradigm there were assumptions about only running applications when you need them and therefore not paying for unused resources.  Which is fine in a research paper, a lab or a startup with only 1,000 customers.  But start adding real people (especially non-IT folk) and existing business processes to that mix, and most decent-sized enterprises find that switching things on and off isn't realistic for a good proportion of the applications they use.

In that context, I think the landscape is maturing somewhat.  There used to be a narrative that everything is agile, and that although you want public cloud agility, a variety of factors (security, latency, etc.) constrain you.  So why not use OpenStack / Docker / Stackato / VMware to build a private cloud and get the best of both worlds?  Let your applications magically float between clouds. That has been the common hybrid cloud story for the past few years.

Right now I'm hearing more people talk about not everything needing to be Netflix.  Some new workloads are public cloud native and need the agility and flexibility it provides (along with the ability to attribute the cost to a specific P&L).  But some things just don't need that flexibility (and those things are commonly running the majority of the business's revenue-driving workloads).  In addition, there are newer workloads such as AI, where the maths is so compute-intensive that dedicated on-prem GPU-accelerated infrastructure is the preferred platform once things get past R&D.

Today's story is becoming less Hybrid Cloud, where apps move between different environments, and more Hybrid IT, where different platforms have different benefits and those benefits are the decision points around where to host each workload.


More container talk for HPE Cloud 28+


 


Real World Uses for Containers

I spoke to UKFast on 7th March about some of the business benefits customers see from container technologies.

 


In Memory Compute

I spoke with UKFast on 14th March about in-memory compute instead of disk-based storage and the performance benefits this brings.


Predictive Analytics

I spoke with UKFast on 14th March about Predictive Analytics.


Simon and Rob discuss AWS uptime

Hi Rob,

This isn't important, just some points I was thinking about during the team call this morning. You always have an interesting point of view, so I thought I'd share and get your feedback.

In particular, the discussion about how long AWS would need to survive without another outage in order to achieve 5x9s availability. It occurred to me that it's a completely fatuous measurement, because I believe we're at that DevOps crossroads where we can no longer depend on hardware to save us from data loss. Looking at the AWS Service Level Agreement, it basically states that "we won't charge you if we can't provide a service"; to quote them directly, they say "AWS will use commercially reasonable efforts…", which is not a commitment to service that I'd like to have my salary dependent upon.

Over the past 20 years or so there has been a creeping paradigm of hardware resilience (RAID on disks, remote replication, clustered failover, hardware error checking and many other such technologies) that has allowed developers to ignore resilience in their systems. But in the world of cloud, all of the resilience that has been baked into hardware is now gone. If you don't own your own datacenter and haven't ensured resilience and failover capability then, as a developer, you'd better start thinking about it again, because your service provider isn't going to put it in; it would make their whole charging model unsupportable.

There is evidence that developers are already thinking about this and have done something about it; I'm probably not on the bleeding edge of ideas here. For example, one of my bugbears with Docker is its lack of state. But in an environment where there isn't any resilience, it makes absolute sense not to preserve any state: if there is no state, you can start another session on another piece of hardware anywhere you like with minimal disruption to your applications. Complete statelessness is hard to achieve, though, so somewhere there has to be a database, and that database needs hardware resilience.

I think my point really is that we have to fundamentally change our thoughts about the way systems work. Talking about 5x9s resilience is no longer relevant.

 

Comments, thoughts, ideas?

 

So I replied:

Hi Simon

I’m seeing a couple of data points as the market view on public cloud matures.

  • It’s not cheaper.
  • It's an opportunity risk.

There's a growing market for tools and professional services that help public cloud customers constrain their costs.  I was amused to hear that Amazon itself runs a second-hand market for reserved instances (RIs), where customers can resell their unused RIs to other AWS customers to recoup some of those costs.  And there are consultancies that help customers either buy those RIs from the marketplace at the lowest price or sell their RIs at the highest price.

Seems to conflict with the idea that cloud is cheap.

The opportunity risk piece comes from the fact that organisations can implement products faster with public cloud, without going through the CAPEX procurement cycle.  And if those new products or services fail, they aren't left with infrastructure they don't need.  So cloud isn't a cheaper way to deliver infrastructure, but it is a faster way to deliver it in the short term.

How is that related to your point above?  I think these are examples of how the perception of public cloud is changing.  I personally like to take the long view.  When TVs were first introduced they were going to be the death knell for the cinema.  When the internet came around it was going to be the end of books and newspapers.  The introduction of new technology always comes with predictions that it will supersede existing tools.  Don't get me wrong; CDs are no more (although interestingly vinyl lives on), and newspapers are in a dwindling market.  But the cinema experience is fundamentally different to the TV experience.  The book experience is different to reading websites.

Those examples show that different technologies enable us to consume things in different ways.  There's no doubt the x86 server market is in decline.  But that's from a position where 100% of technology was on premises.  As we learn more about the benefits of public cloud, we also learn that it has drawbacks in the way it's been implemented.  I therefore believe a balance will be found where some workloads will necessitate being within a customer's physical location, and some will benefit from being hosted with a hyperscale cloud provider.  IT will be consumed in a different way.  I don't believe on-premises infrastructure will go away though.

I childishly take some glee from AWS outages.  I take umbrage at the assertion that infrastructure is irrelevant and the developer is the new kingmaker.  Software development is hard.  Infrastructure is hard.  So when AWS or Azure or AWS (again) has an outage, it brings that reality into stark relief.  The glee I get is from the humble pie being eaten by so-called pundits who portray cloud as the next evolution of technology when really it's the emperor's new clothes.  It's somewhat unfair to expect developers to be experts in making their code efficient, well documented, modular and quick to market, and then also be experts on high availability, disaster recovery, data lifecycle management, etc. etc.  Both skill sets are important, but they are fundamentally different.  To expect public cloud to deliver the same service without the same level of expertise is patently ridiculous.  It won't achieve 5x9s.  But then I don't believe it should.  It's something different, and should be architected and understood as such.

Tying the various threads together, I strongly believe there is a use case for agile, DevOps-style methodologies, and for an agile infrastructure to support them.  But I firmly believe that one size doesn't fit all, and that infrastructure architects and expertise have a strong role to play alongside software development expertise in delivering the right solutions for the future.

Well, that was a long-winded response.  I can see a copy-and-paste blog post coming :-)


Will containers take over the world?

A question from a colleague: do I think everything will go containers?  Our conversation got somewhat sidetracked into DevOps and version control, but we had a good and healthy debate.

To try and answer his question, my view is that containers are a solution to two problems:

  • It's a business problem experienced by startups in Silicon Valley who need to iterate functionality quickly.  With a small team they need to get something to market and then keep adding new functionality quickly.  So it's solving a business problem around small teams, speed to market and the ability to pivot based on rapidly changing market requirements and competition (Uber vs Lyft, for example).  Containers are a way of supporting that microservices-style architecture to facilitate a DevOps / minimum viable product / continuous integration approach.  Other existing technologies don't support that requirement as well as containers do.
  • It's a technology problem experienced by teams using public cloud, where the infrastructure is typically crap and unreliable.  The ability to quickly deploy lots of copies of your app lets you mitigate that crap infrastructure.  Application portability is a problem that legacy technology platforms don't really answer very well (hard-coded / embedded IP addresses say "hi").

 

For those problems, containers are great.  But that's not every use case.  Banks selling mortgages is a relatively static, heavily regulated requirement.  The bank may take on more mortgages, but it doesn't need to iterate quickly; the requirement is an architecture that supports speed, resiliency and high-volume data processing.  Walmart might need to iterate quickly to get an app to market to differentiate against Target.  But the core requirement to take payments for products in store, and for each sale to be reflected in the stock replenishment system, is a static requirement.

 

Alongside the benefits they bring, they will bring other problems (integration into existing monitoring dashboards for performance, errors, etc. is a massive ops issue that devs don't consider, for example).  If an organisation can see past wanting to use the latest shiny thing and can match an infrastructure-plus-software solution to its actual need, I don't see containers replacing everything.

 

But then maybe I’m an out of date old guy.


Raspberry PI TFT Screen / USB Bitcoin Miner

Earlier this year I built a Bitcoin mining project with my old Raspberry Pi.  Even though it's the older model, the USB ASIC offloads all of the heavy processing from the CPU, so it's an ideal project for the older Pi (there's a whole separate thread around whether we really need the extra CPU horsepower in the newer Pis, although a benefit is that the old Model Bs will probably get a lot cheaper on eBay).


Buyer beware: I did this about four weeks ago and had to do loads of fiddling to make it work.  I've gone back through my bash history to document everything I did; if I've missed anything, that's my excuse.  But feel free to drop me a line if you've tried this and it doesn't work.

I installed a clean version of Jessie to start the project.  But I had to do a fair amount of fiddling to get the screen up and running.   Most of the effort in getting this project working was around getting the screen to a) work, and b) stay on.  As such I thought I’d document it here to see if it helps anyone.

The screen I picked up is from Amazon – described as a “Makibes® 3.5 inch Touch Screen TFT LCD (A) 320*480 Designed for Raspberry Pi RPi/Raspberry Pi 2 Model B”.  The back of the screen says “3.5inch RPi LCD (A) V3 WaveShare SpotPear”.

[Photo: the screen]

I think the Makibes thing is a rebrand, as most of the Google search results for the errors I was getting brought up the WaveShare screen.  As per the comments on the Amazon page, many people found this link helpful.  It got me to the point where I could manually load the modules, but it didn't stay persistent over a reboot.  As per the page I just linked to (and I'm just copying and pasting his work here, check out the link for the full info), I could get the screen working with a modprobe:

modprobe flexfb nobacklight regwidth=16 init=-1,0xb0,0x0,-1,0x11,-2,250,-1,0x3A,0x55,-1,0xC2,0x44,-1,0xC5,0x00,0x00,0x00,0x00,-1,0xE0,0x0F,0x1F,0x1C,0x0C,0x0F,0x08,0x48,0x98,0x37,0x0A,0x13,0x04,0x11,0x0D,0x00,-1,0xE1,0x0F,0x32,0x2E,0x0B,0x0D,0x05,0x47,0x75,0x37,0x06,0x10,0x03,0x24,0x20,0x00,-1,0xE2,0x0F,0x32,0x2E,0x0B,0x0D,0x05,0x47,0x75,0x37,0x06,0x10,0x03,0x24,0x20,0x00,-1,0x36,0x28,-1,0x11,-1,0x29,-3 width=480 height=320
modprobe fbtft_device name=flexfb speed=16000000 gpios=reset:25,dc:24

If your screen matches the description above and that works, then happy days.  Here’s what I ended up doing to make it persistent post reboot:

First off, if it isn't already enabled, enable SPI in the raspi-config tool.
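If you'd rather skip the menus, the same thing can be done from the command line. This is just a sketch and assumes your version of raspi-config supports the non-interactive (nonint) interface:

sudo raspi-config nonint do_spi 0   # 0 = enable SPI; takes effect after the next reboot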

In "/boot/config.txt" I've appended the following lines:

# Enable audio (loads snd_bcm2835)
dtparam=spi=on
# dtoverlay=ads7846,cs=1,penirq=17,penirq_pull=2,speed=1000000,keep_vref_on=1,swapxy=0,pmax=255,xohms=60,xmin=200,xmax=3900,ymin=200,ymax=3900
dtoverlay=ads7846,speed=500000,penirq=17,swapxy=1
dtparam=i2c_arm=on
dtoverlay=pcf2127-rtc
# dtoverlay=w1-gpio-pullup,gpiopin=4,extpullup=1
device_tree=on

/boot/cmdline.txt passes parameters to the kernel at boot.  I've appended a few parameters to make the console appear on the SPI TFT screen instead of the default HDMI output, and to disable console blanking:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait fbcon=map:1 fbcon=font:ProFont6x11 logo.nologo consoleblank=0

/etc/modules now looks like this:

snd-bcm2835
i2c-bcm2708
i2c-dev

Something I didn't pick up from other forum posts and blogs is the config required to auto-load modules on boot.  So I created /etc/modules-load.d/fbtft.conf to effectively do what modprobe was doing from the command line:

spi_bcm2708
flexfb
fbtft_device
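One thing worth calling out (treat this as an assumption rather than gospel): /etc/modules-load.d/ only lists which modules to load; it doesn't carry the parameters from the modprobe commands above.  If the modules load but the screen stays blank after a reboot, persisting those parameters in a modprobe.d file is the likely fix.  A sketch, using the same values as the manual commands, in /etc/modprobe.d/fbtft.conf:

# module parameters applied whenever flexfb / fbtft_device are loaded
options flexfb nobacklight regwidth=16 init=-1,0xb0,0x0,-1,0x11,-2,250,-1,0x3A,0x55,-1,0xC2,0x44,-1,0xC5,0x00,0x00,0x00,0x00,-1,0xE0,0x0F,0x1F,0x1C,0x0C,0x0F,0x08,0x48,0x98,0x37,0x0A,0x13,0x04,0x11,0x0D,0x00,-1,0xE1,0x0F,0x32,0x2E,0x0B,0x0D,0x05,0x47,0x75,0x37,0x06,0x10,0x03,0x24,0x20,0x00,-1,0xE2,0x0F,0x32,0x2E,0x0B,0x0D,0x05,0x47,0x75,0x37,0x06,0x10,0x03,0x24,0x20,0x00,-1,0x36,0x28,-1,0x11,-1,0x29,-3 width=480 height=320
options fbtft_device name=flexfb speed=16000000 gpios=reset:25,dc:24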

Console blanking is apparently a bit busted in Jessie, so /etc/kbd/config needs the following settings (they aren't next to each other in the file, so you'll need to search through it to make both edits):

BLANK_TIME=0
POWERDOWN_TIME=0

And /etc/init.d/kbd needs to look like this (search for the screensaver stuff in the file; it's quite long):

# screensaver stuff
setterm_args=""
if [ "$BLANK_TIME" ]; then
setterm_args="$setterm_args -blank $BLANK_TIME"
fi
if [ "$BLANK_DPMS" ]; then
setterm_args="$setterm_args -powersave $BLANK_DPMS"
fi
if [ "$POWERDOWN_TIME" ]; then
setterm_args="$setterm_args -powerdown $POWERDOWN_TIME"
fi
if [ "$setterm_args" ]; then
# setterm $setterm_args
TERM=linux setterm > /dev/tty1 $setterm_args
fi

That should get you to the point where your Raspberry PI will reboot and then always use the TFT screen as a display output.
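A quick sanity check after rebooting (the framebuffer numbering is an assumption based on my setup; yours may differ):

ls /dev/fb*                         # the TFT should show up as a second framebuffer, typically /dev/fb1
dmesg | grep -i -E 'flexfb|fbtft'   # shows whether the display modules initialised cleanly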

To complete the project I used this USB ASIC to do my Bitcoin mining.  Amazon is out of stock at the time of writing; however, the listing will give you what you need to search eBay for, etc.  This Instructables guide is complete enough that there's little point in me replicating it here.  However, there were a couple of additions I needed to make before it worked and to complete the project.

[Photo: the USB ASIC dongle]

First off I needed to install some additional packages:

sudo apt-get install autoconf autogen libtool uthash-dev libjansson-dev libcurl4-openssl-dev libusb-dev libncurses-dev git-core

Download the zip file, then build and install the code:

wget http://luke.dashjr.org/programs/bitcoin/files/bfgminer/3.1.4/bfgminer-3.1.4.zip
unzip bfgminer-3.1.4.zip
cd bfgminer-3.1.4
./autogen.sh
./configure
make clean
make
sudo make install
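Before pointing it at a pool, it's worth a quick check that the build landed on your PATH (just a sanity check; the exact output will vary by version):

which bfgminer
bfgminer --help | head -n 5   # prints the first few lines of the help/usage output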

The last part of the project is a total hack, as I ran out of steam with my enthusiasm.  It's pretty insecure and absolutely not best practice, etc. etc., but I got lazy and it works.  I'm sure you can make something better given a few more brain cells.  First off, make sure the Pi boots into console mode and not X Windows (option number 3 in raspi-config).  Then choose option:

B2 Console Autologin Text console, automatically logged in as 'pi' user

Then I created the following script, /home/pi/login.sh:

#!/bin/bash
#
/usr/local/bfgminer-3.1.4/bfgminer -o stratum.bitcoin.cz:3333 -O username.password -S all

Every time it boots, the Raspberry Pi automatically logs in as the 'pi' user, and every time the pi user logs in, that script is run.  Totally insecure, and I'll probably go back and fix it when I've got a spare 30 minutes.  But it works for now.
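One detail the write-up above skips is what actually calls the script at login.  The easiest (equally hacky) way is to append it to the pi user's ~/.bashrc, for example:

chmod +x /home/pi/login.sh
echo '/home/pi/login.sh' >> /home/pi/.bashrc   # run the miner on every console login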


Windows 10 IoT Install Guide

What you need to install Windows 10 IoT on the Raspberry Pi 2

The first thing to understand is that the Raspberry Pi 2 is different to the original Raspberry Pi.


 

The form factor is different, so if you have a Raspberry Pi and are upgrading you'll want a different case.  The second point is that you are installing a different OS, so you'll need to consider whether your hardware is compatible.  For my Raspberry Pi running Raspbian I use an old Buffalo WLI-UC-GN; I just pulled this out of my spare parts bin when I set up Raspbian, and in hindsight I probably purchased the Buffalo card in the first place because it was Linux compatible.  The Windows 10 IoT Hardware Compatibility list is here.  From that list I purchased this TP-LINK Wireless USB Adapter and it worked first time for me.


For the case I picked up this case and it has a couple of nice features:


The base is full of holes which helps airflow.


The case also has a middle layer, so you can have most of the unit protected while still exposing the expansion ports.  That's more by accident than design; I was just looking for a cheap case, but I'm quite happy with what turned up so I thought it was worth sharing.

Once you've got the hardware ready you'll want to install the OS.  For this you'll need a Micro SD card and the Windows IoT Dashboard.  Go to Microsoft's Get Started with Windows IoT page and scroll down to the "Set up a Windows 10 IoT Core Device" section to download the software.

Fire up the software and the default page is "Set up a new device".

[Screenshot: page 1]

Insert an SD card and click the “set up a new device” button.

[Screenshot: page 2]

The tool will then download the software and burn the image to the Micro SD card.  Insert the Micro SD card into the Raspberry Pi and power it on.  You'll want to be on the same Ethernet network as the Raspberry Pi, because you configure the device over the network.  The difference between the Windows 10 configuration and the Raspbian configuration is that with the Linux setup I can plug in a keyboard and mouse and configure networking from there; Windows assumes this is a device purely for the Internet of Things, so all configuration is done over the network from a PC.

Note that the Micro SD card is underneath the PI 2, not on top like the first Model B.


 

After a while you should see minwinpc appear in the "My Devices" section of the IoT Dashboard.

[Screenshot: page 3]

Click on the globe logo under "Open in Device Portal" and it will launch a web browser.  I'm only going off my own experience, but whilst I generally like the Microsoft Edge browser, it didn't really like the Windows 10 IoT device configuration portal: silly things like buttons not responding when I tried to get it to do things.  Chrome didn't seem much better.  I had the best results with Internet Explorer 11.  YMMV.

As you can see from the previous screenshot, my Raspberry Pi was given the address 192.168.1.20 by DHCP, so the login page is http://192.168.1.20:8080/.

Default username is

Administrator

Default password is

p@ssw0rd

Note the '@' and the zero instead of the letter 'o'.

Now you can see the configuration web page where you can set up the Raspberry Pi.  If you're security conscious or running on a LAN with other users, you should change the default password, and probably change the name of the Raspberry Pi from 'minwinpc' to something that uniquely identifies your device.

[Screenshot: page 4]

In the Networking page you can then configure the wireless settings:

[Screenshot: page 5]

Now you should have a Raspberry Pi 2 running Windows 10 IoT.  Enjoy!