Sunday, January 1, 2017
- December 27th, 2016: For today's #haskell exercise we expand our inquiry into SAIPE/poverty data by analyzing it by US State.
- December 26th, 2016: Today's #haskell exercise looks at partitioning data by each unit's 'size'; then we visualize the partitioned data. Today's #haskell solution shows we can cluster and analyze US counties by their relative sizes.
- December 23rd, 2016: There are many (3000+) US Counties. Today's #haskell exercise visualizes them clustered by SAIPE/poverty data. Today's #haskell solution shows the US Counties clustered by SAIPE/poverty data visualized in @neo4j.
- December 22nd, 2016: For today's #haskell exercise we shift focus from SAIPE/poverty data to debt by US State, total and per capita. Today's #haskell solution shows US State debt, totals and per capita, with a caution about the IO monad in a REPL.
- December 21st, 2016: For today's #haskell exercise we look at clustering SAIPE/poverty data and at patterns in the clusters. Today's #haskell solution shows US counties fall into 4 clusters using SAIPE/poverty data, with Los Angeles a standout.
- December 20th, 2016: Today we look at enumerating US Counties from SAIPE/poverty data, then determining their US State, deterministically. We find in today's #haskell solution that there are a lot of US Counties, so we index and enumerate them.
- December 19th, 2016: For today's #haskell exercise we make a set of (static) String values enumerable and indexable. On the shoulders of our previous parsing work, today's #haskell solution parses SAIPE data to index US State names.
- December 16th, 2016: Today's #haskell exercise looks at score-cards for US Census data.
- December 15th, 2016: For today's #haskell exercise, we're back looking at US Census data: SAIPE/poverty by State and county #DataAnalytics. Today's #haskell solution shows how to parse line-by-line whilst carrying over structure-context from prior lines.
- December 14th, 2016: In today's #haskell exercise we begin to look at qubits and Pauli rotations on them. Today's #haskell solution represents qubits and rotates |0> and |1> through the Pauli X operator.
- December 13th, 2016: Today's #haskell exercise looks at the US Census data and asks some by-State questions around incomes. Today's #haskell solution uses Network.HTTP and Applicative Functors to examine populations and mean/median incomes.
- December 12th, 2016: For today's #haskell exercise we observe a dinner-table conversation of a math professor and her husband. Today's #haskell solution shows us happiness for children is having a parent as a math professor. I know this.
- December 9th, 2016: Today is #FF on @1HaskellADay. I mean, that's today's #haskell exercise: Examine Twitter JSON for #FF-analytics. Today's #haskell solution shows us who NOT to #FF if you want the follow back. Doesn't it.
- December 8th, 2016: Today's #haskell exercise comes to you all the way from Smyrna! Construct a word-square. And from 2000 common English words we have #haskell solutions for the 3x3 and 4x4 word-squares.
- December 6th, 2016: Today's #haskell exercise will look at EMA/Exponential Moving Averages to analyze trends of, e.g. #BitCoin. Today's #haskell solution inlines state into the EMA-recursive function to analyze #BitCoin price history.
- December 5th, 2016: In today's #haskell exercise we look at the SMA/Simple Moving Average as a trend-estimator for, e.g.: #BitCoin price. Today's #haskell solution uses the SMA-function as a Comonad. Below are 1 year and 3 months of #BitCoin SMA-analyses. CORRECTION! SMA 15 and SMA 50 are industry norms in the markets. Corrected (and generalized) #haskell solution here.
- December 1st, 2016: For today's #haskell exercise we look at a larger data set with a year's worth of BitCoin price history. I love the #haskell standard library. I love that you can write today's solution: #BitCoin prices in one line of code.
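The December 19th entry, making a static set of String values enumerable and indexable, can be sketched minimally. This is not the posted solution; it simply zips the names with indices into a `Data.Map`, and the three state names are a stand-in sample for the SAIPE data:

```haskell
import qualified Data.Map as Map

-- Index a static list of names so each can be looked up by String
-- and enumerated by Int. A stand-in for indexing US State names.
indexed :: [String] -> Map.Map String Int
indexed names = Map.fromList (zip names [0 ..])

main :: IO ()
main = print (Map.lookup "Alaska" (indexed ["Alabama", "Alaska", "Arizona"]))
-- Just 1
```

For a truly static, compile-time-known set, a sum type deriving `Enum` and `Bounded` would serve the same purpose.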
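The December 14th qubit entry admits a very small sketch. A qubit can be represented (as an assumption here, not necessarily the exercise's own representation) as a pair of complex amplitudes for |0> and |1>; the Pauli X "bit-flip" operator then just swaps the two amplitudes:

```haskell
import Data.Complex

-- A qubit as its pair of complex amplitudes for |0> and |1>.
type Qubit = (Complex Double, Complex Double)

ket0, ket1 :: Qubit
ket0 = (1 :+ 0, 0 :+ 0)   -- |0>
ket1 = (0 :+ 0, 1 :+ 0)   -- |1>

-- Pauli X swaps the |0> and |1> amplitudes.
pauliX :: Qubit -> Qubit
pauliX (a, b) = (b, a)

main :: IO ()
main = print (pauliX ket0 == ket1 && pauliX ket1 == ket0)  -- True
```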
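The December 6th entry inlines state into an EMA-recursive function. A minimal sketch of that idea, threading the running average as an explicit accumulator (the smoothing factor `alpha = 2 / (n + 1)` is the conventional choice, assumed here rather than taken from the solution):

```haskell
-- n-period Exponential Moving Average, seeded with the first price,
-- with the previous EMA value carried as an explicit accumulator.
ema :: Int -> [Double] -> [Double]
ema _ []       = []
ema n (x : xs) = go x xs
  where
    alpha = 2 / fromIntegral (n + 1)
    go prev rest = prev : case rest of
      []       -> []
      (y : ys) -> go (alpha * y + (1 - alpha) * prev) ys

main :: IO ()
main = print (ema 1 [1, 2, 3, 4])
-- with n = 1, alpha = 1, so the EMA reproduces the series: [1.0,2.0,3.0,4.0]
```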
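The December 5th SMA entry can also be sketched. The posted solution uses the SMA-function as a Comonad; the sketch below is only the plain sliding-window mean underlying it, with the window width `n` corresponding to the SMA 15 / SMA 50 industry norms mentioned in the correction:

```haskell
import Data.List (tails)

-- n-period Simple Moving Average: the mean of each full n-wide window.
sma :: Int -> [Double] -> [Double]
sma n xs = [sum w / fromIntegral n | w <- windows n xs]
  where
    -- all contiguous windows of exactly k elements
    windows k = takeWhile ((== k) . length) . map (take k) . tails

main :: IO ()
main = print (sma 3 [1, 2, 3, 4, 5])  -- [2.0,3.0,4.0]
```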