TIC-80 is a small fantasy computer that includes a full development environment (runtime + code editor) along with a set of tools to develop sound effects, music, maps, and sprites. Programs are limited to 64 KB and can be written in Lua, in languages that compile to Lua like Fennel (my preferred language!), or in other supported languages like Wren.
I might take a stab at developing a simple dice game (Zilch perhaps?) in TIC-80. One of the draws, beyond simply being able to deploy the game to any platform that runs TIC-80 (browsers, desktops, phones), is that RetroArch bundles TIC-80, so TIC-80 games run on anything that runs RetroArch, including devices like the RG280V: modern handhelds built to play retro games. It's a fantastic gateway to writing portable games as a hobby!
I hesitate to post on 01 April, but here's a (true) embarrassing story: after reading about how fast the M1 MacBooks are, I had decided, with some reluctance, to avoid them for a while (I like to stay a bit behind the curve when system architecture is changing). I switched jobs in March, and the new employer said they were sending me an Intel MacBook, which arrived in great condition; I was very pleased that the touchbar was gone and the keyboard had reasonable travel.
So I've been using this laptop for a month now, and this morning I got a weird error about the ARM architecture when I attempted a brew install jq. I clicked on "About This Mac" and saw it was indeed an M1 chip.
I hadn't even noticed!
Modern war involves sanctions, which impact civilian parts of the economy to a substantial degree. Some open source code is being modified to cause additional damage and disruption in areas associated with the war, but of course there is collateral damage even beyond the intended civilian targets.
This reminds me of the idea of "total war". Wikipedia's summary:
Total war is warfare that includes any and all civilian-associated resources and infrastructure as legitimate military targets, mobilizes all of the resources of society to fight the war, and gives priority to warfare over non-combatant needs.
I think total war is likely much more disruptive than the open-source sabotage we're seeing now, but it seems like a related concept somehow; rather than trying to constrain the scope of the conflict, the aim seems to be to enlarge it, with the hope that doing so can avoid a catastrophic escalation. This seems a bit like a loose cannon to me: lots of power unleashed, but little control over it. Is there a term for this kind of sabotage of largely civilian infrastructure to support a war effort? The more I read about it, the murkier the issue becomes.
Spoken close to 30 years ago, Carl Sagan's insights have aged well, I think.
From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Consider again that dot: That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar", every "supreme leader", every saint and sinner in the history of our species lived there - on a mote of dust suspended in a sunbeam.
The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.
Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.
It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we've ever known.
Carl Sagan, 1994
I've been working on generating an Atom feed for this site (because RSS should never die!), and in the course of my research, I discovered a conversation on Twitter from someone else who was working on migrating their blog to Tiddlywiki. The name "Joe Armstrong" rang a bell, and it hit me suddenly...this can't be the late Joe Armstrong of Erlang fame, can it? But further poking around showed it was indeed. His journey to Tiddlywiki struck a chord with me; it's just so similar to my own, from thinking of Tiddlywiki as a note system to discovering that it's really more of a database and programming language disguised as a web page. He wrote an interesting post in 2018 about wanting new blogging software that would stay still so he could focus on blogging rather than blog software. He even has a quote in there that really hits home for me:
I decided to take the easy way out. Write my own static site generator. Moreover I would use no external dependencies...Are you crazy - no dependencies at all? Perhaps I am crazy, but every time I've included somebody else's code it has turned round and hit me in the face a few years later.
So what does he go with in 2018? Emacs' org-mode. Ah, I know this path well! But I share his viewpoint: if you want a low-maintenance site, you need stable, monolithic building blocks that you assemble yourself into the desired solution (stuff like SQLite, Fossil, Emacs, and Tiddlywiki fills the bill nicely). Joe started with org-mode, and then added his own embedded Erlang tags to automate things. But, facing challenges similar to the ones I've faced (I'm sure), he discovered Tiddlywiki a couple of years later. He has a post about his eureka moment working with it. It has this quality that isn't apparent at first: it's a full-fledged system for manipulating a database of notes (and the notes themselves contain the system!), so no extra embedded language is needed. Instead, it not only has wikitext (with macros and widgets), but can embed JavaScript code as well. This self-contained package is self-sustaining, needing no updates unless they are desired. This property makes it different from other systems (he mentions Jekyll and Hugo): both depend on separate language ecosystems (Ruby and Go, respectively) as well as on the associated projects being maintained. His tweet about using Tiddlywiki because it will endure really resonates with me:
you can hopefully read [my posts] in 1000 years
And then, just months later, he was gone. For me, finding his posts about all this only three years later is amazing...I never knew Joe, but I've long been a fan of Erlang, and it turns out he was a fan of the same sort of tech that I've found inspiring. The great news is that, since he switched to Tiddlywiki, his site should stay up as long as Github will host it, and even if that ends, anyone who has saved the wiki (I have!) can easily host it elsewhere, not only over the web, but also over newer technologies like IPFS. And that's great! His blog has a host of interesting posts about crazy stuff that I adore, like Sonic Pi and the joy of really bad websites. If you're curious, the link is below, as usual.
I worked with autonomous vehicles for a bit, and one of the elements of machine learning that jumps out from that time is the large investment in structuring the neural net and deciding on the overall architecture, as well as curating a good data set for training.
Notably, the thing that can be made to be quite fast is the training itself. This may not be that surprising, but in the context of the human mind, it's kind of amazing...it very much reminds me of the scenes in the first Matrix film where the characters can learn a skill (like kung fu or flying a helicopter) in a matter of seconds. We're not quite to that "instant expert" effect with AI, but we're not far off:
"It takes about an hour for the agent to learn to drive around a track. It takes about four hours to become about as good as the average human driver. And it takes 24 to 48 hours to be as good as the top 1% of the drivers who play the game."
A.I. has mastered 'Gran Turismo' — and one autonomous car designer is taking note
Wired is pointing out that Firefox is now "flatlining", after dropping in browser market share from 30% in 2008 to less than 4% today. They're 100% right: Firefox has no clear future.
I'm a niche user, so while I have strongly-held beliefs about what I like in software, I know that most others won't care about those same things. But since Firefox is bleeding users at high speed, I'm going to outline what I would like to see in Firefox that could give people a reason to use it again.
Split View
Remember browsers before they had tabs? Opera started the craze, and it took off and was adopted across all browsers over a few years. The next frontier is a split window view. While I normally would argue that the window manager should be doing this, most folks don't have mastery of their window manager, but could easily make use of "split vertically" and "split horizontally" options.
Better Bookmarking and History
Lots of folks leave tabs open forever, and when they try to use bookmarks instead, they find themselves overwhelmed with bookmarks. This lack of organization within the browser opens them up to having companies organize their information instead: find that Twitter post using Twitter instead of the browser, just search Google again to find that recipe you were reading yesterday, etc. As a privacy-centric product, one thing Firefox can do is have a UI that pops up when the user is typing in the omnibox and progressively filters all their bookmarks based on the input, prominently highlighting when the bookmark was created and the last time it was visited. This is sort of available today by typing * <bookmark name>, but so few know about changing search bar results on the fly that the feature might as well not exist for 99% of users.
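To make the idea concrete, here's a toy sketch of the filtering behavior I have in mind. This is purely illustrative Python with made-up fields, not Firefox's actual bookmark schema:

```python
from datetime import date

# Hypothetical bookmark records; a real browser stores much richer metadata.
bookmarks = [
    {"title": "Pale Blue Dot speech", "created": date(2020, 3, 1), "visited": date(2022, 2, 11)},
    {"title": "TIC-80 wiki",          "created": date(2021, 9, 4), "visited": date(2022, 3, 30)},
]

def filter_bookmarks(query: str):
    """Progressively narrow bookmarks as the user types in the omnibox."""
    hits = [b for b in bookmarks if query.lower() in b["title"].lower()]
    # Surface recently-visited bookmarks first so stale ones sink to the bottom.
    return sorted(hits, key=lambda b: b["visited"], reverse=True)

for b in filter_bookmarks("tic"):
    print(f"{b['title']}  (created {b['created']}, last visited {b['visited']})")
```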
Keyboard Customization
Many apps allow keyboard shortcuts to be customized, but browsers tend not to. In particular, it seems silly that extensions can't override defaults like Ctrl-w, Ctrl-p, Ctrl-n, and Ctrl-t.
Serious Tools for Addon Management
Every addon is now a potential supply-chain attack against end users, so there is a lot of value in vetting high-profile extensions. Mozilla currently does this for extensions like uBlock Origin and SingleFile, but not for other heavyweights like Vimium, Dark Reader, HTTPS Everywhere, and Decentraleyes. A system that vets the code not only at install time but also whenever updates land would have a lot of value for folks who want to customize Firefox's behavior.
The general theme is that Firefox can give users a reason to come back by offering what other browsers cannot or will not. Instead, Firefox has been chasing what other browsers do, and this means they are always behind. Rather than focusing on "personalization" that allows users to change the color of the browser chrome, they need to focus on "functional personalization" that allows users to change the behavior of the browser in a safe way.
The Aboriginal Flag (I think that's a proper name?) has a Wikipedia page that's fascinating. While it currently shows the flag, it has an unexpected caption:
The above file's purpose is being discussed and/or is being considered for deletion. See files for discussion to help reach a consensus on what to do.
Of course, this is because the creator of the flag, Harold Thomas, has asserted copyright over the flag, prevented it from appearing in lots of places, and has now negotiated with the Australian government for a payment of $20M to allow the government to use it, and it's still not in the public domain. I'm not going to embed an image here, but it's worth describing in its entirety, just so we can marvel again at the fact that it is both copyrighted and that the Australian government paid $20M for it:
The top half of the flag is black, the bottom half is red. A yellow circle with a diameter half the height of the flag is at the center.
I'm in the wrong business!
Australia Pays $20 Million To Buy The Copyright Of Aboriginal Flag, But It's Still Not Public Domain
I run uBlock Origin, so I notice when sites load resources from central repositories like Google, Cloudflare, or the popular CDNs for code, like unpkg. These resources are awesome and I'm glad they exist, but they introduce a dependency on an external service, which can change at any time. So I tend to avoid Google Fonts because I prefer hosting webfonts myself. I discovered "google webfonts helper", which is a no-hassle way to download only the fonts I need for my page and then host them myself. This is particularly useful with frameworks like Remark, which I use to create presentations.
The problem with companies that maintain such an iron grip on the ecosystem is that in the beginning, it seems like that "iron grip" is a feature. Without checks and balances, the company can solve problems and push the ecosystem ahead at a breakneck pace while onlookers cheer the company's incredible efficiency.
But then the problem emerges: what happens when the company no longer makes decisions you like? Suddenly that grip starts feeling less like a feature and more like a bug. Apple has put a lot of their revenue eggs in the App Store basket, and given that Apple has more economic pull than most countries, laws aren't going to be particularly effective in preventing them from getting their way: rules have two problems. They require enforcement (which is expensive), and they don't change incentives, so malicious compliance becomes the order of the day.
Developers react to 27% commission with astonishment and anger
Quick context: the author of a popular JavaScript package, colors, updated the package with malicious code that would spin in an infinite loop. This update was picked up globally because of how JavaScript package management works, and blocked deploys until a fix could be applied. For companies that don't test before deploying, this took down production.
The HN discussion about this article is extensive, much of it focusing on whether what happened with colors is really an "attack" or not. While that question is interesting, the article itself has some really good insights about package manager behavior and how it affects the overall ecosystem. I've worked on software build infrastructure for a few years, and it's a problem very closely related to package management. The key insight that Russ highlighted for me was that the way a package manager resolves dependencies has second-order effects on how resilient the overall language ecosystem is to errors, whether intentional or not.
In particular, Russ' distinction between a high-fidelity build and a low-fidelity build seems extremely useful to me, and I hadn't run across it before. In short, high-fidelity builds resolve dependencies by using the latest transitive dependencies that direct dependencies have already tested with. Low-fidelity builds don't follow this pattern, and therefore suffer when new versions of packages appear that are broken and/or incompatible. Russ makes several other points around this, so the whole post is worth a read, but I wanted to highlight this aspect that was both new to me and useful. I will specifically look for this trait in package managers I evaluate, as it would have saved me a lot of pain in previous JavaScript and Python projects!
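To make the distinction concrete, here's a minimal sketch in Python with made-up package names and version data (this is just my reading of the idea, not Russ' code): the low-fidelity strategy reaches for the newest published release, while the high-fidelity strategy reuses the versions the direct dependency was already built and tested against.

```python
# Hypothetical registry state; the newest release of "colors" here stands in
# for a broken or sabotaged version.
tested_with = {
    # What each direct dependency's author last built and tested against.
    "web-framework": {"colors": "1.4.0", "strings": "2.1.0"},
}
available = {
    # What the registry currently offers, newest first.
    "colors": ["1.5.0", "1.4.1", "1.4.0"],
    "strings": ["2.2.0", "2.1.0"],
}

def low_fidelity(direct_deps):
    """Resolve every transitive dependency to its newest published version."""
    return {dep: available[dep][0]
            for direct in direct_deps
            for dep in tested_with[direct]}

def high_fidelity(direct_deps):
    """Reuse the versions the direct dependency already tested with."""
    return {dep: version
            for direct in direct_deps
            for dep, version in tested_with[direct].items()}

print(low_fidelity(["web-framework"]))   # {'colors': '1.5.0', 'strings': '2.2.0'}
print(high_fidelity(["web-framework"]))  # {'colors': '1.4.0', 'strings': '2.1.0'}
```

Under this scheme, a broken new release only reaches the high-fidelity build once the direct dependency's author has upgraded to it, tested, and published a new release of their own.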
What NPM Should Do Today To Stop A New Colors Attack Tomorrow
I recall some years ago hearing about the "copyright cliff" from some informal research that was done related to Amazon listings. In an effort to turn up that research again, I ran across this (much more formal) treatment from Berkeley Law. The basic idea is that publishers aim to maximize profit, which means focusing energy on publishing the most profitable works. As copyright terms have lengthened, the number of works that remain under copyright but are unpublished and unavailable has grown. Accordingly, the researchers found a negative correlation between copyright and availability: works that are still under copyright are more likely to be absent from store shelves.
This is interesting because it's a good example of a cost to society that is difficult to measure, and therefore gets ignored (McNamara Fallacy). It very much reminds me of how other industries can externalize their costs onto society in areas where money isn't a good measure of value.
A little over a year ago, I wrote a bit about the bet on when VR headsets would sell more than 10M units in a year. The original bet was made in 2016 about whether this would happen in 2019. It clearly did not. While I haven't yet pulled industry-wide numbers, The Verge reported in November 2021 that the Quest 2 alone had sold 10M units since it launched in October 2020, about 13 months prior to The Verge's report. I'd say it's pretty safe to say that 2021 was the year VR headsets sold more than 10M units.
We're a few decades into the internet now. One of the most interesting questions to me about the internet is how jurisdictions work. Since the internet is a sort of 'overlay' on the physical world, do borders extend into the overlay, or are they ignored? There isn't a checkpoint or customs like there is when you land in another country and need to present paperwork. So how do we determine what jurisdiction a site can be sued in?
This ruling brings us one step closer to being able to figure it out, and I think it brings us closer to the maybe-naive answer: the jurisdiction is determined by where the servers are. Now, it doesn't go that far, or even really mention servers. But it does seem to conclude that just because a site is visible in a place does not mean the site can be sued there.
Very brief background: a citizen of Texas wants to sue the Huffington Post for libel. But before that can proceed, the court needs to figure out if Texas is the right place for the trial. Eric Goldman's writeup is worth the full read, but in short, the majority finds that simply because a website is accessible to people in a given jurisdiction (in this case, Texas), and even if that website advertises to people in that jurisdiction, it does not mean the website can be sued in that jurisdiction. This approach seems reasonable to me, though it really seems to only apply to sites hosting speech, like blogs or news. There is some language in the majority opinion that suggests if the site had more firm ties to Texas, they might have found differently:
...but its story about Johnson has no ties to Texas. The story does not mention Texas. It recounts a meeting that took place outside Texas, and it used no Texan sources.
But at least in the case of speech otherwise not tied to a particular jurisdiction, I think this ruling makes a ton of sense; I'm not even sure how one would implement the dissenting opinion, which Eric summarizes:
So the dissent apparently is fine with HuffPost being sued anywhere it's geolocating ads, which is likely everywhere. In other words, the dissent would honor the plaintiff's choice of forum. I think the majority reaches the better result.
I agree!
Fifth Circuit Issues an Important Online Jurisdiction Ruling - Johnson v. HuffPost
I've had an interest in distributed computing for a while now, but the user experience is always less streamlined than that of centralized services. I've never been able to sell peers on the "freedom" aspect or the "no central control" aspect. Because of this, more and more of our everyday lives are taking place over networks that are owned by advertising companies (Google, Facebook, Twitter) whose only real advantage is convenience.
This came to a head this month as the advertising companies decided they didn't want to be a conduit for certain kinds of discussion. This is a completely obvious outcome, and could have been anticipated years ago by anyone with a passing knowledge of software and the business models of these companies. The fact that the U.S. government relies on these companies to communicate with citizens is just wild.
Moderation is a very tough problem for any centralized platform: it's hard to moderate well at scale because moderation isn't a one-size-fits-all problem. Some kinds of speech are fine in certain contexts, but totally inappropriate in others. Centralized moderation is bad at accounting for this, and tends to create moderation policies that lack nuance.
Planetary is a piece of software I've never used! But I am on Secure Scuttlebutt (ID @iOOGrbvjXS1YAQWkL/eBy2UOAzUhQGRRG3p5IBFcnLQ=.ed25519) and I love Planetary's mission: make Scuttlebutt more usable for non-technical users. I use a client called Patchwork, and there's also a Go implementation of Scuttlebutt called Go-SSB; no matter what client you use, you can interact with the network in very similar ways.
So what makes Secure Scuttlebutt different from pretty much everything else?
- All your data (and data for people you follow) is collected on your device. This essentially takes all the computation that would normally happen on servers in a data center at Twitter, Google or Facebook and puts it on your phone or laptop.
- The side-effect of this is that you can read and post when your device is offline, and when it comes back online, it will sync with the rest of the network, sending posts that you wrote offline, and downloading posts others published during that time.
- You are in charge of deciding who you follow and who you block. There's a lot of discussion on SSB right now about one user being able to inherit the list of blocks from other users if they want. This creates "trust networks" to make moderation more scalable.
- It's all open source, so the community can drive development and create new features (e.g. playing Chess over SSB)
- It's a great example of "protocol not product" meaning many products could be created from the protocol. The closest example of this that folks are familiar with is the web: some websites you visit might be applications, others are newspapers, and others are games, but they all use the same set of protocols (mostly HTTPS).
- Because there is no company with growth metrics and engagement metrics, posts are sorted chronologically and there are no ads or "recommended" posts.
- SSB is based on PKI (public key infrastructure), so private messages between users on SSB really are private: they are encrypted with elliptic-curve cryptography via libsodium (see the sketch just after this list).
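For the curious, here's a rough sketch of the underlying idea using PyNaCl (Python bindings to libsodium). This isn't SSB's actual private-message format, which is more involved; it just shows the core property that only the intended recipient's key can decrypt a message.

```python
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()             # each identity boils down to a keypair
bob = PrivateKey.generate()

sending_box = Box(alice, bob.public_key)  # sender's private key + recipient's public key
ciphertext = sending_box.encrypt(b"meet me on the chess pub at 8")

receiving_box = Box(bob, alice.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet me on the chess pub at 8'
```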
If this sort of thing is interesting to you, give it a look!
The bet in question is when VR will sell more than 10 million units in a single year. That's the definition of "take off" in this context. As a guy who was writing "VR is here!" in 2012, it's a little bit wild to me that, at the close of 2020, we're still having this argument about whether VR is successful. I've spent about $1000 over the past 5 years on three different headsets (PSVR - $400, Oculus Go - $200, Oculus Quest 1 - $400) and I've had incredible experiences with all three. Farpoint, Doom VFR, and Skyrim VR on PSVR were standouts, the Oculus Go gave me Darknet, BigScreen, and Virtual Virtual Reality, and the Quest has changed my life during the COVID-19 shutdown: Beat Saber, Pistol Whip, and Synth Riders have become a way to stay active even when I'm spending more time indoors, and games like Walking Dead: Saints and Sinners give me an immersive world to explore with a nice narrative. If VR disappeared off the face of the earth today, I'd call the whole endeavor a success. I've thoroughly enjoyed my time in VR.
So it's interesting to me that in the most technical online forum I frequent, there's this air of doubt about the whole thing. All kinds of arguments are brought out: VR offers nothing new, it's not immersive, most people don't need it, and it's taking too long to have an 'iPhone moment'.
That last one gave me pause. How many technologies ever have an 'iPhone moment'? I'm not sure exactly what that means, but if the bar for "taking off" is that every adult goes out and buys one, I fear most technologies never have an 'iPhone moment'. Here's the thing: the iPhone wasn't revolutionary. It brought a bunch of really new and cool stuff to a device that everyone already owned: the cell phone. So consumers were already on a treadmill of upgrading their cell phone every few years, and when the time was up on their current model, they had an option to get an iPhone. The cost factor was mitigated by a built-in financing system offered by cell carriers (adding monthly charges to an existing bill to cover the device cost). It's a good device in many ways, so it sold well. This is a crazy bar to set for most other technologies, especially those that have no immediate precursor.
VR has exactly that issue: no one owns "some other headset" that they'll replace with a VR headset. It's a totally new activity, is often perceived as extremely expensive, and is often associated with a dedicated room and lots of auxiliary equipment like beacons set up around the room. All this has changed now that inside-out tracking has matured, and costs have dropped 10x in the past few years.
Was there anything else like this? It reminded me a little bit of how my family reacted when my dad bought a Kaypro II CP/M machine in 1984. No one knew what it was good for, even though it did a few neat things. So I looked back at the sales of PCs through the 80s in Ars Technica's 2005 retrospective on computing. In 1984, Apple sold about a million units, and it decreased from there, selling only about 350,000 units by 1987. Even the juggernaut, IBM PC clones, sold 6 million units in 1987, growing from 2 million units in 1984. It wasn't until 1988 that all PC clones combined surpassed 10 million units shipped per year, largely driven by business sales and the fact that all the different models were largely compatible with each other (the origin of the term "IBM compatible"!), which allowed competition to thrive.
After reading and thinking about that, my takeaways are:
- Disruptive products can take a long time to cross the 10 million units / year threshold. PCs changed the landscape of...everything, and still took more than 10 years to cross that threshold.
- Business sales drove much of the PC market growth. This is not true for VR, and it may be slowing the growth of the platform.
- Compatibility between different models was critical to the PC's success, but VR has been balkanized into models with differing capabilities, each often tied to its own store (Steam, Oculus, Windows, etc.)
I disagree with how Greg cast this:
So what went wrong? Looking back at VR hype in 2016, there were a lot of reasons to be optimistic..
But I do think there are milestones that VR needs to cross before it can become more mainstream. There need to be multiple cheap standalone models that users can choose from without losing a library of apps they've invested in. Proprietary app stores are now absolutely entrenched (for better or for worse), so I expect the way to approach this is for vendors to incentivize cross-buy. It may be a small hit to store exclusives, but given that we're concerned about the growth of the VR platform as a whole, this may be a case where vendors that incentivize cross-buy end up getting a slightly smaller portion of a much larger pie.
VR also needs to get higher resolution displays (this is happening) so reading text in VR is natural and easy. This will allow more business uses, particularly in a pandemic-stricken world. We have some apps that do this, but unfortunately most are not yet better than their real-world equivalent in terms of convenience, and cross-platform support is lacking. I use Linux almost entirely, with the exception of a Mac I have for work, and Virtual Desktop, which is what I would normally have bought in a heartbeat, only supports Windows. This makes perfect sense, as it's a single-developer project and Windows has more business use than other platforms, but as long as reading text is a challenge, business use will still be a tough sell.
In the meantime, I'll continue using my headsets daily and enjoying the remarkable experiences they bring.
Geeking with Greg: When will virtual reality take off? The $100 bet.
It's not totally unreasonable for a company to unify login under one account. In the case of Facebook and Oculus, however, I don't think it makes sense from a customer perspective. In particular, I'm very concerned with the "black bag" treatment users get when they are banned from the platform. They are often unable to get any information on why they are banned, there is no appeal, and they lose access to hundreds or thousands of dollars of hardware and software they purchased. I don't think any customer would agree to such a setup if they had any choice, but they really don't: the contracts are not something a customer can negotiate, and the Quest/Quest2 are not products with any real competitor right now. Facebook and Oculus are not alone in this approach, and I think it's just as problematic elsewhere (Google comes to mind).
Facebook hit with antitrust probe for tying Oculus use to Facebook accounts - TechCrunch
I was lamenting lately that in 2005, dominant laptop manufacturers made laptops with replaceable hard drives, memory, and batteries, but today, most are sealed and can't be upgraded. This design limits their usable life in a sort of "weakest link" sense: the component with the shortest life determines the life of the device. This bothers me not only because of the cost and waste, but also because if you turn around to buy a replacement, the manufacturer will probably try to sell you an "improved" version, even in cases where no improvements were needed.
So when I saw DevTerm, I was intrigued. It's true that it isn't a real laptop replacement, but in the category of "portable utility computer", it comes really close. The batteries are standard 18650 cells, and all storage is replaceable. As far as I can tell, it doesn't even have the option to plug it into the wall: the batteries are charged by removing them and using an external charger. This has the nice side-effect that one could carry a couple of extra sets of batteries to extend its usable time.
I was somewhat saddened that the discussion on HN was dominated by complaints about ergonomics. I feel these concerns are mitigated somewhat by the nature of the device: it's not likely that someone will use this the same way one would a laptop (many hours a day, every day). There was no discussion of the design in terms of longevity: an open source design with replaceable components.
stjo wrote:
As many have noticed, it is quite expensive and unergonomic. Their selling point is entertainment, nostalgia and cyberpunk feel, not really a useful tool.
I think I disagree, but I'm not really sure, since I'm a sucker for computing nostalgia and cyberpunk aesthetics. I see the flat design as a unique feature that doesn't put a screen between me and whatever is in front of me. I love the idea of a tablet computer, but as an Emacs user, the lack of a keyboard is an ongoing source of frustration. This device remedies that, so I view DevTerm a bit like the tablet computer I always wanted: leave it running Emacs all the time, and use it to code, take notes, ssh into my other machines, all less intrusively than a laptop would be.
The device isn't available yet (the site says 2021), so much of this is my speculation, but I'm intrigued nevertheless.
My old Google WiFi router from TP-LINK ended up triggering some kind of kernel bug in Linux and tanking network performance by causing the NIC to constantly reassociate with the router every few seconds. After discussing with System 76 engineers, I decided to try a new router to perform a controlled test, and I didn't like that the Google router had no web interface. So I researched and picked up the RT2600ac from Synology. It not only resolved the problem, but provided better coverage to our back room than the Google WiFi mesh did. It also had more advanced configuration, providing site filtering and safe search on a profile-specific basis (profiles are essentially groups of devices). It worked well for us, but recently we started having debilitating outages where latencies would spike 400% and packet loss was close to 80%. I spent days troubleshooting, looking at Comcast outage maps, and talking with Comcast support. They ultimately told me to buy a new cable modem, citing my current model's age (6 years) as past EOL. After I got off the phone with them, I decided to check the performance coming directly out of the cable modem. Sure enough, it was fantastic, which made the router the prime suspect. After some testing, it turned out that the safe search features incurred this performance penalty at times. I've since disabled them and performance is back to previous levels. Overall, the router is superb, but the safe search features don't seem to be totally nailed down.
One of my favorite features on the Switch is the ability to, at any moment, capture the last 30 seconds of gameplay and save it. It's a bit of a trick because of the performance implications of constantly capturing gameplay at a decent framerate, but Replay Sorcery manages to pull this off using JPEG framecaps stored in a memory-backed ring-buffer. Very glad to have an open source tool for instant replays!
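The core trick is easy to sketch: keep the most recent frames in a fixed-size, memory-only ring buffer and dump them when the user asks. Here's an illustrative Python version of the idea (ReplaySorcery itself is written in C and is far more careful about performance and encoding):

```python
from collections import deque

FPS = 30
SECONDS = 30

# Old frames fall off the front automatically once the buffer is full,
# so memory use stays bounded no matter how long the session runs.
frames = deque(maxlen=FPS * SECONDS)

def on_new_frame(jpeg_bytes: bytes) -> None:
    """Called for every captured frame; appending to a deque is O(1)."""
    frames.append(jpeg_bytes)

def save_replay(path: str) -> None:
    """Dump the buffered frames; a real tool would mux them into a video container."""
    with open(path, "wb") as f:
        for frame in frames:
            f.write(frame)
```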
GitHub - matanui159/ReplaySorcery: An open-source, instant-replay solution for Linux
A few highlights worth mentioning:
- Sacha Chua presenting Emacs news highlights
- John Wiegley presenting an Emacs development update
- Richard Stallman presenting on NonGNU ELPA
- Leo Vivier presenting two talks on Org-roam
There's also some fun stuff!
- Vasilij (wasamasa) Schneidermann presenting on State of Retro Gaming in Emacs (?!? Sounds awesome!)
- Matthew Zeng presenting on Extending Emacs to Modern GUI Applications with the Emacs Application Framework
I'm not sure how much I'll attend live, but I'll definitely watch all the talks I miss from the recorded streams. Exciting!
This is huge! I somehow had totally missed that Solid was this far along, and that Bruce was working on it. I have a good friend who worked at a startup, Singly, which, though now defunct, was aimed at storing data in a way not so different from Solid. Here's how TechCrunch wrote of it in 2012:
The company began its life as The Locker Project, which would capture data from a user’s online activities (e.g. tweets, photos, checkins, etc.) and archive those items in a storage locker of sorts. Those efforts continue as an open source project, but Singly as it stands today is the commercial result of the problems solved while building The Locker Project. With the commercial launch, the company will offer the “app fabric” for $99/month for up to 1 million users, handling authentication, friend-finding, social sharing, and the like. Meanwhile, pricing is available upon demand for those with more complex data needs (aka the “data fabric,” as they call it), including syncing, storage, filtering, de-duping, intelligent indexing and more.
And here's an excerpt from the Inrupt press release about Solid:
The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things — your computer, your phone, your IoT whatever — is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
Pretty cool development to the issue of data privacy...I hope this takes off!
A couple of days ago, I wrote about my thoughts regarding Apple's trajectory with M1. I didn't mention Big Sur, but that's part of the same pattern: Apple is going to continue to tighten the screws to prevent customers from running software of their choosing, all under the banner of security. This post discusses both, but was prompted by Apple's OCSP servers getting overloaded yesterday, which resulted in everyone finally realizing that Apple computers ask Apple for permission every time they run a program. The post sums up my thoughts quite well, so I won't reiterate here beyond this summary from the post itself:
The day that Stallman and Doctorow have been warning us about has arrived this week. It’s been a slow and gradual process, but we are finally here. You will receive no further alerts.
I've been looking for a planning tool for a small team for some time. I haven't even tried this, but I just love the fact that a game developer wrote this over 14 months because of a vacuum of planning tools that aren't cloud-oriented enterprise SaaS. It's not open source, but you can build it for your own use and it runs entirely offline, with its datastore in plain text (JSON) so it can be version controlled. This is completely up my alley! If you're interested in a pre-compiled binary, the author has made releases available on itch.io for $15. Very cool!
GitHub - SolarLune/masterplan: MasterPlan is a project management software
I was a huge fan of Apple from 2007 to 2011. In 2011, I started to get this vibe from them that they wanted to make iPad and Mac the same. For me it started with a subtle change in how application status was reflected in the dock. The upgrade to Lion removed a light in the dock below applications that were "running" rather than simply "pinned". I sometimes like to manage resources myself by killing applications and starting others. Apple's response to criticism of this change was essentially "You don't need to know if an application is running or not." I found this disconcerting, and discovered I fell right back into my old Linux habits without much trouble at all. I essentially eliminated all Apple products from my household because of this decision.
It seems like an irrationally strong reaction to such a small change. And I think it was! My thinking at the time was that Apple was trying to shift the Overton window towards having desktops follow a mobile computing paradigm, starting with process management. I see this as a risk because desktops are a last bastion of relatively free computing: open platforms that can be developed for, forked, and improved without paying any fees or having to agree to a Terms of Service or End User License Agreement. But Apple had started to show signs of bringing the App Store to desktops in 2010, though that App Store was distinct from the iOS App Store. I had originally considered its launch Apple's attempt to replicate the success of the App Store on iOS.
But when OS X removed this light from the dock, I saw a trajectory whose end goal is likely to be total vertical integration: a fortress of technology that is completely controlled inside its borders. Some see this as a good trade for potential security benefits. I see it as a poor trade for the freedom it removes. So I left.
This week's announcement of the M1 is another step. M1 machines will run iOS apps natively, even when they have not been customized to do so by the developer. This change, combined with Catalina's "phone home for every executable" and the increasingly-arduous hoops to jump through to run non-approved apps, suggests Apple's strategy is to make one App Store for all devices, take their 30% cut of all purchases, and remove or allow software at will. Such a system would make computing very sad for me. It would be a big loss.
Naturally, Apple represents just one facet of our computing future, and the scope of that universe is constantly expanding. But their decisions set trends, inspiring followers to propagate them widely. And while I don't mind having Apple around as an option for people, I wouldn't want their computing model to dominate. I think a lot of the future hackers and makers are made by growing up around open technology and playing with it. A world filled with closed devices would really suppress this exploration, I think. I don't think anyone wins in that scenario.
Apple unveils M1, its first system-on-a-chip for Mac computers - 9to5Mac
Google Photos is one of Google's most amazing products...I've been a very happy user for years. It's so good, it's just the most obvious choice for any casual photographer. No crazy upsells, great performance, good feature set.
I recently realized that a lot of my favorite photos are backed up to Google and...that's it! Folks get their accounts closed with no explanation and no recourse, though. So I used Google Takeout to try and get a copy of my photos for safe-keeping. The request took two days to fulfill, and resulted in 39 archives, each 2GB, with no reasonable way to automate the download. The system has given me 7 days to download all 39 archives before they are deleted. The process is time-consuming, and with 1 day left I still have to download the final 18 archives. Wednesday night project!
But Google is also starting to paywall some features of Photos, so I'm sensing this is the beginning of the end for the greatness of Photos, as new photos will start counting against storage quotas.
Google's effort to monetize photos makes complete sense, and their strategy appears to be to incentivize purchases of Google One, which will include more storage and also unlock more features. I'm now hoping that I can sync photos to my Synology and/or NextCloud accounts instead. It's not so much that I mind paying, it's that Google likes to change the ground rules after you've joined, which means I'm forced to adapt on their schedule. But I like to think of myself as the customer, where Google should be adapting to my needs, not the other way around.
Come June 1, 2021, all of your new photos will count against your free Google storage – TechCrunch
The issue here is that new content distribution mechanisms don't serve all the customers, so customers write tools to solve their problem. But the way copyright law is written, those tools are illegal. Distributing and using them was made criminal by the DMCA. Promoting science and the useful arts isn't just about money, it's also about providing access to the works.
I could watch Leonard discuss this stuff all day...it's just so interesting to me how the law intersects with technology, and how that plays out in practice.
Leonard discusses a number of interesting topics in this video. He reviews relevant parts of the text of the laws themselves, but also dives into the relevant case law, which is often overlooked in discussions with non-legal folks trying to reason about how to interpret the law.
As a programmer, I often want to argue that tools like youtube-dl are legal. I feel like they should be totally legal...streaming content comes and goes for lots of reasons, and being able to timeshift it to watch on a train or airplane feels a lot like the behavior that was determined to be legal in the Betamax decision from 1984. In this mindset, it's tempting to argue that youtube-dl is totally legal, and it's upsetting when folks come along and assert that using it is a crime.
But the sad truth is that the DMCA is really badly written, and criminalizes lots of behavior that many people would think is completely reasonable (like using youtube-dl to download videos from YouTube that are Creative Commons licensed). Understanding the current state of the law is a first step towards figuring out how to improve it.
I have a friend who worked at Google for some years. Over lunch, I mentioned my frustration with how they cancel products, and he said that Google's poor behavior is limited to free products; once you pay, they treat you better. This story is a nice counter-example, as I expect Nest Secure wasn't free or even particularly cheap.
Google Kills Nest Secure, Can't Be Bothered To Explain Support Roadmap - Techdirt
It's sort of neat that Tomb is a zsh script leveraging dm-crypt under the hood. It's a shame that it is Linux-only, but I use Linux for most things anyway, so it's a convenient way to carry around encrypted containers on e.g. a USB drive; if you lose the drive, at least your data isn't compromised.
I worked at Amazon Music when Google Music launched, and I was blown away by how good it was. I thought for sure that Amazon would give up on their music store before Google did. Fast forward 9 years, and I'm listening to Amazon Music while I write this.
In the intervening years, I've learned to avoid Google products as much as possible, both in my personal and my professional life, simply because you can't count on them. Google is like this genius uncle that always has a new contraption that amazes you, but can never show up on time, and when he finally does show up, he can't even remember what cool contraption you're talking about when you ask if you can play with it again. It's neat the first couple of times, and then you get frustrated and move on with your life, focusing instead on projects you can count on.
I ran across this in a discussion on HN of sites that carry high-quality public domain ebooks. I value archival-quality material highly, from webpage snapshots to books to videos and music. A site that focuses so much on quality and polish is a great addition to the volume-driven giant archives like Gutenberg and Archive.org.
Fennel is remarkable for a few reasons. I'm a big fan of Phil Hagelberg's work in general, and Fennel is no different. It's a small library that allows Lisp programming in Lua, and when coupled with LuaJIT, provides a tiny, fast, portable scripting environment. In particular, I'm impressed by Fennel because it is focused on limiting runtime cost vs. Lua and permits compilation of Fennel to Lua via --compile, which is fantastic for retaining the slim dependency tree I love about Lua scripts.
After years on Google Reader, and then TT-RSS, and then a basic Python script I threw together, I've spent the last year or so on FreshRSS. It requires a DB, but SQLite works beautifully, making installation trivial. It's been very stable and provides a great "river of news" experience.
RSS in general acts as my "information nervous system", collecting up data from across the net and providing a way for me to filter, read, and share it with external tools. This site is powered by Shaarli, which is a bookmarking tool. What's cool is that FreshRSS can dispatch out to Shaarli directly, so as I'm reading through feeds, I can hit a button to share the link and my thoughts on it. Together, they are enough for a basic social network, since Shaarli sites also produce RSS feeds that can be consumed by others' RSS readers.
I'd like to see this more broadly adopted! If you're interested in democratizing social networks, it's a very nice approach that works right now.
Shaarli is a great piece of software, but it does seem to lack a bit in the variety of themes available for it. After upgrading, I found that the theme I'd installed, a clone of the del.icio.us theme, actually caused the site to 500, so I went looking for an alternative. This theme looks serious! Lots of commits, and based on the screenshots, it sticks very faithfully to Material Design.
Sadly, while I've installed it on this instance, I'm still getting errors when I try to enable it, so I'll have to dig in and see if that's something with my instance or if I should look at updating the theme. This instance is using version 0.12.0 of Shaarli, but the latest release for the theme is 0.11.0.
A good friend sent this my way and I enjoyed it immensely. A really cool take on the Turing Test.
I think back to 2005, and wonder how cool it would be if I had a folder full of web page snapshots showing not only what I was interested in back then, but also what the web looked like then. Even though many of the blogs I read back then are likely long gone, if I had an archive that showed the pages the same way they existed, I could still browse through the same stuff today that I thought was useful or interesting back then.
That's what SingleFile does. Under the hood, it uses data URIs to capture page assets and encode them into the text of the page itself, so the resulting file, while as large as the sum of the assets it contains, is a stand-alone artifact that can be opened in any browser without any plugin at all. In my testing so far, it works extremely well.
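The data URI trick itself is simple enough to sketch in a few lines. Here's an illustrative Python version (SingleFile does the real work in JavaScript inside the browser, and handles CSS, fonts, iframes, and much more):

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    """Read an asset and inline it as a data: URI so the page needs no external files."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"

# Example: rewrite <img src="logo.png"> to <img src="data:image/png;base64,...">
# print(to_data_uri("logo.png"))
```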
There's an automatic archive function I haven't played with. I wonder about the feasibility of indexing the archive files and layering search on top of them. Those two features together would be quite potent.
SingleFile: Web Extension to save a faithful copy of a complete web page as a single HTML file
I have an interest in personal information management, and as a heavy user of Emacs, it's hard for me to fairly evaluate software outside of Emacs, simply because I'm so used to everything Emacs provides. I tend to discount other programs because they lack some feature I'm used to in Emacs.
But Joplin's synchronization through NextCloud, good client support for Android, Linux, and macOS, end-to-end encryption, and the ability to instantly edit my notes in an external editor ('emacsclient'!) really make it my dream system for notes.
Joplin - an open source note taking and to-do application with synchronisation capabilities
I wish this were the curriculum I learned from. I reason intuitively about most of the math I do daily, so learning symbolically has always been a reach. Better Explained presents excellent intuitive explanations for various concepts that make me feel more like I understand the dynamics of the system I'm working on.
Fabien's explanations are always a joy to read. I'm a huge fan of his "Game Engine Black Book" series, and I had no idea he did smaller analyses like this one, which explores the structure and workings of Andrew Kensler's famous "business card ray tracer".
I remember back in 2007-2010 I gave a lot of thought to how to share data securely between my Mac (I was just getting into Mac back in 2007, around when they switched to Intel) and my "legacy" Linux machines (I ended up dumping Mac in 2011, so keeping Linux in the loop turned out to be a good idea). Inevitably, TrueCrypt came up because there are simply so few cross-platform on-the-fly encryption systems. But after 2014's very strange shutdown of the TrueCrypt project, as well as the murky licensing situation around TrueCrypt, I expected to see a bunch of alternatives emerge (sort of like what happened with RSS after Google shut down Google Reader). That doesn't seem to have happened. Now in 2019, when I search for cross-platform disk encryption, VeraCrypt comes up, which is essentially a continuation of TrueCrypt, and everything else is either barely maintained, only for one platform, or closed source. This surprises me somewhat, but given how much work goes into creating and maintaining advanced security software, perhaps it shouldn't!
VeraCrypt - Free Open source disk encryption with strong security for the Paranoid
I've only read about 20% of this, but so far I'm extremely impressed at both the quality of the observations (package managers are as much a human problem as they are a technical one) and at how well-structured the ideas are.
Skinning was all the rage back in the '90s. I remember installing LiteStep (which seems to still be around!) on Windows 98 to get a custom look-and-feel. Winamp3 was all about crazy skins, and theming in general was just part of the hobby that was using a computer. Fast forward 20 years, and the newest feature I see in every app and operating system is 'Dark Mode'. I think the change is partly because we became more thoughtful about the security impact that cosmetic changes could have. As with all rules, there are valid cases that get prohibited, and developing a consistent, high-quality dark theme for Firefox is one of them. Luckily, ShadowFox adopts an unorthodox approach that side-steps some of those security measures by distributing itself as a script, rather than a browser addon. This is clever, and the result is quite beautiful to my eye, particularly the consistency across the 'protected' pages like 'about:addons' and 'about:profile'. I'm very happy some folks took the time to work on it!
I was skeptical that Delta Chat could work well. It was nevertheless attractive because of its 'self-hosted' nature, so I installed it on my family's devices and set up some accounts. Onboarding needs to be smoother, but it does work. Account detail sync seems flaky, so my family can't see my profile photo, even though I set one.
That said, Delta Chat is amazing. It uses plain old email over IMAP connections to work its magic, and it uses end-to-end encryption to protect the content of messages. Photos work great, and it also supports audio messages, which is a lot of fun. I highly recommend trying it out.
I really enjoy almost all of Sam Altman's writing, and this is no exception. His insight about compounding yourself is a real challenge, since there aren't that many ways to do it (though he says the opposite!). He lists a few: capital, technology, brand, network effects, and managing people. For a company, this list makes sense, but as an individual, I'm limited to technology and managing people (there's an argument that network effects apply as well). Interestingly, these are the two areas I've focused on most in the past 10 years or so, so there's at least some alignment there.
John Carmack has always been an inspiration to me, and he seems to be thinking a fair bit about exactly what technology will lead to that sort of compounding:
I struggle with internal arguments around how much pursuing various new things "compounds" on my current base. I hate to say "not worth my time to learn that", but on the other hand, there is a vast menu of interesting things to learn, and some are worth more than others.
There are some obvious guidelines. One is to focus on technologies that are what I call "prime abstractions": they can seem uselessly abstract, but once you learn them, they have broad applicability across both discipline and time. These technologies are often the least attractive, especially to newcomers, since they tend to not lead to immediate results. Buckling down and learning procedural, object-oriented, and functional programming idioms is like this. Even learning SQL well has some of these elements. I was lucky that I picked up Linux back in 1997 as my primary computing environment: it has endured beyond my greatest expectations, and has had a huge compounding effect in my work for the past 20 years.
But then there's the harder question: given what I currently know, what should I learn next to either compound my effectiveness, or set myself up to learn the next thing that will compound my effectiveness? For me, it's been difficult to figure that out in the realm of technology: there's a ton of stuff to get interested in, and it is sometimes hard to discern which will be most useful when coupled with my existing knowledge. But, as an engineer focused on technology for so many years, I think it was a slam dunk for me to start focusing on effective methods of working with others. Not just management (which is a world unto itself), but working with peers, managers, executives, contractors, and other teams as well. I have many more years to continue improving in those areas, but I would like to start using some time to focus again on technology as well.
Seems like Chromium is shifting to remove an API, webRequest, in favor of a more limited version, declarativeNetRequest. In doing so, it is implicitly favoring the API of a more limited, commercial net blocking utility over more open-source, non-commercial rivals. Gorhill summarized the top-level effect of the change really well here, I think:
Extensions act on behalf of users, they add capabilities to a user agent, and deprecating the blocking ability of the webRequest API will essentially decrease the level of user agency in Chromium, to the benefit of web sites which obviously would be happy to have the last word in what resources their pages can fetch/execute/render.
I hear very little discussion about what a user agent really is, and about the right of users to manage, shape, and ultimately control what code is downloaded and run on a computer that they have purchased. As Cory Doctorow concluded in perhaps my favorite of his many excellent works, Lockdown: The coming war on general-purpose computing:
Freedom in the future will require us to have the capacity to monitor our devices and set meaningful policies for them; to examine and terminate the software processes that runs on them; and to maintain them as honest servants to our will, not as traitors and spies working for criminals, thugs, and control freaks.
Chromium: Deprecating the API uBlock Origin and uMatrix Depend Upon
My main gripe with mainstream devices is that they cater to short-term goals: shiny, cool features that will be old and boring in 12 months, so you can throw away the device and buy a new one. The Pyra, on the other hand, is a device built to optimize for users, rather than for the business. It has a high price, but it's made almost entirely in the EU, and it's modular, so components like the screen and port configuration can be swapped out while leaving the CPU and RAM intact, for example. It trades sleekness for power, choosing to run plain ol' Debian and offering a slew of USB, HDMI, and SD ports for expansion and customization.
For me, I expect it would be the perfect device to throw in a backpack: mostly used for gaming, but also able to browse the web and even support lightweight programming. Best of all, it can run Emacs, so I'd never be without a comfy computing environment!
I honestly don't know what to make of the JavaScript ecosystem. I know React is very popular. I find the tooling obtuse, though. Proponents seem to think bringing npm, yarn, babel, and webpack into an existing stack isn't complex, but my experience with that process, though limited, tells me it carries enough complexity that anyone should think twice about using it for anything but the most demanding of projects. I think React is contributing to the notion that web pages are obsolete and web apps are the new standard, and I think that trend is harmful. I fall back to the Principle of Least Power: if the problem you're solving can be addressed with something simpler, that's the way that will lead to the greatest success. React is powerful, but most often not needed.
Intercooler builds on jQuery and creates a declarative language for front-end code, one that informs API design and retains simplicity while still providing a fast, modern UX. Check it out for your next project!
Much like Remark, Markdeep is a client-side rendering framework for Markdown. It's interesting because it supports building a static site with Markdown, but with no generation step: you simply include the 'markdeep line' at the end of the file and publish it. Like Remark, it requires client-side JavaScript to work, though. Because the rendering is done in JavaScript, it integrates with Highlight.js to perform automatic source code highlighting, and it has ASCII art diagram rendering, which reminds me a bit of ditaa. The output is simple and attractive...it seems like a nice way to publish one-off posts.
There are a couple of cool tweaks that make it work well with Emacs, which has the superb markdown-mode by Jason Blevins.
First, even though Markdeep file names should end in .md.html, Emacs works best if editing is done in markdown-mode. This can be accomplished with a simple header:
<!-- -*- mode: markdown; -*- -->
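If you'd rather not add that header to every file, an auto-mode-alist entry does the same thing globally; this is my own tweak, not something from the Markdeep docs:
;; Open Markdeep's .md.html files in markdown-mode without a per-file header.
(add-to-list 'auto-mode-alist '("\\.md\\.html\\'" . markdown-mode))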
Second, you don't need to run markdown on the file to view it; rather, you simply open it in a browser. It's easy to bring over HTML-mode's C-c C-v binding to markdown-mode:
(define-key markdown-mode-map (kbd "C-c C-v") 'browse-url-of-buffer)
This allows a nice workflow of editing and previewing with no further adjustments, I've found.
I've argued in the past that Remark is not as good as RevealJS, because it relies on client-side rendering of the markdown payload to render the slides. This makes pages published using Remark much less visible to search engines (which I can live with), and unusable unless the client has JavaScript support (harder for me to live with, given my love of EWW).
What I've neglected to mention in previous discussions is that by putting rendering into the client, tools like Remark supply the end-user with the 'source code' for the presentation as a side-effect of allowing them to view the presentation, which makes the presentation a self-contained artifact. I think this approach has more merit than I originally gave it credit for, so I plan to give Remark a try in some upcoming decks and see how I think it compares.
This decision is the correct one, and it's good that it is being formally recognized as such. I am somewhat surprised it took this long, however. Copyright has little place preventing folks from using, maintaining, and repairing devices they legally purchased, regardless of whether the manufacturer is trying to prevent them from doing so. Cory Doctorow makes an excellent related point, I think:
They have this pretense that DRM is ‘effective’ but then they grant a ‘use exemption’ and assume that people will be able to bypass DRM to make the exempted uses, because they know DRM is a farce...The thing is that there's these two contradictory pretenses: 1. that DRM is an effective means of technical control, even in the absence of legal penalties for breaking it, and; 2. That once you remove the legal stricture on breaking DRM, it will not be hard to accomplish this.
In Groundbreaking Decision, Feds Say Hacking DRM to Fix Your Electronics Is Legal - Motherboard
I really like these guidelines, and I like that they encourage people to be better even if they aren't breaking any rules. RMS is thoughtful, as usual.
I've struggled for years with using cool packages like rspec-mode because I use tools like rbenv to manage Ruby versions. The problem is that rbenv relies heavily on shell customizations to work, and when Emacs spawns a subprocess, bash is run as neither a login shell nor as an interactive shell. If you carefully read the section of the bash manual about invocation, you'll find that neither .bashrc nor .bash_profile are sourced for shells that are neither interactive nor login. Sure, you can tell Emacs to use -i when it invokes bash, but this causes other functions that rely on rapid subprocess creation to behave strangely, or at least be quite a bit slower.
But I kept reading the manual, and it turns out the solution lies in the BASH_ENV environment variable, which, if set, points to a file that bash will happily source immediately after it is invoked non-interactively. I created a file that adds $HOME/.rbenv/bin and $HOME/.rbenv/shims to my PATH, added (setenv "BASH_ENV" "/home/rpdillon/rbenvsetup.sh") to my initialization, and now I can invoke rspec from within Emacs on demand, exactly as intended. And I also got to learn about BASH_ENV!
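For reference, the relevant bits look roughly like this (the file path is specific to my machine, and the exec-path addition is an optional extra so that programs Emacs launches directly, without going through bash, can also find the shims):
;; rbenvsetup.sh contains little more than:
;;   export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
(setenv "BASH_ENV" (expand-file-name "~/rbenvsetup.sh"))
;; Optional: let Emacs itself find rbenv shims when it spawns processes directly.
(add-to-list 'exec-path (expand-file-name "~/.rbenv/shims"))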
Carmack's talks continue to be the best possible use of 90 (or 120, or 180) minutes for me. His crisp, honest, well-grounded and well-articulated discussion around real-world work to push the technology envelope makes him worth listening to any time I get the chance.
It's super interesting to hear him discuss how computationally demanding VR is. The Oculus Quest was announced: a standalone 6DOF headset that is computationally roughly equivalent to previous-generation consoles like the PS3 and Xbox 360. But, as he points out, those were rendering at 30 FPS at 1280x720, often with no MSAA. To support VR, the system needs to render more pixels, faster: 1280x1280 times two (one image for each eye) at 72Hz. This means roughly 8x the number of pixels being pushed per second, with an additional tax to add 4x MSAA on top.
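Working out the arithmetic (my own back-of-the-envelope numbers, not figures from the talk):
;; Pixels per second: last-gen console target vs. Quest-class VR target.
(/ (* 1280 1280 2 72.0)   ; two 1280x1280 eye buffers at 72Hz: ~236M px/s
   (* 1280 720 30))       ; one 1280x720 buffer at 30 FPS: ~27.6M px/s
;; => ~8.5, in line with the "roughly 8x" figure, before adding the MSAA tax.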
I've been following Josh Parnell's work on Limit Theory since shortly after it was kickstarted. His early videos were simply breathtaking, and I thought he had a really good shot at creating the next generation of procedurally generated space simulations. I don't know enough about everything that's happened since 2012 to have any meaningful takeaways in this particular case, but I have the utmost respect for Josh. He's clearly driven by a serious work ethic boosted by a lot of passion and talent, and I'm sure he will come away from 6 years of work on Limit Theory with a better understanding of how to apply his energies in his next projects to maximize the chance of success. There was a lot about Limit Theory that seemed to be done right. If I had to critique it, the thing that gave me pause was the lack of a publicly available, playable demo (release early, release often).
This made HN today (via Troy Hunt's post). I have to say, I've been happy with it for years in my home, since not everyone in my family cares about their browsing speed or privacy as much as I do. Even visitors in my home get the benefits of it without having to configure a thing.
CrossCode is an action RPG written in ImpactJS that has superb performance even on modest hardware, is cross-platform, and is available DRM-free from GOG. I've been playing early versions of it over the past couple of years and have had a ton of fun. It's got a bunch of great artwork, interesting characters, a novel world, and satisfyingly challenging gameplay that blends puzzles and action beautifully. Its 1.0 release is today, and I'm very much looking forward to starting anew with Lea (the protagonist) to see how the final version of the world improves upon the version I know from 2017.
andOTP is my favorite 2FA app for Android. It is a security product, so being open source is a huge bonus. In addition, it integrates with OpenKeychain to sign and encrypt OTP backups, so that when you get a new device, you don't have to reset every single 2FA key. Some proprietary apps support this via cloud sync, but then your 2FA codes are sitting on infrastructure you don't control, so I think andOTP's approach is better. On newer versions of Android, it integrates with the fingerprint reader to authenticate on startup, which is a nice touch as well.
GitHub - andOTP/andOTP: Open source two-factor authentication for Android
I'm wearing this shirt right now that says "The content of this shirt is no longer available due to a copyright claim." I bought it as a silly reminder that the efforts to strengthen copyright restrictions are deeply misguided; they always have unintended, negative side effects, and do little to achieve their stated aim. The internet is good at spreading ideas, and folks don't seem to cope well with that. This means that, as long as we keep investing in the internet, copyright will continue to weaken. It means that advertising will be more powerful. It means that people will spread misinformation, hoping that others will be misguided by it. Our ability to progress as a society will be governed by our ability to harness these trends for good, rather than fight them. If you find yourself trying to legislate away technological progress, you're probably on the wrong side of history.
package.el with Melpa has been my go-to for years, and I didn't much care when they expelled all EmacsWiki packages from Melpa for security reasons. That did leave all of Drew Adams' work inaccessible to me, though. After trying Quelpa and running into bugs relating to errors during temporary tarball generation, I decided that straight.el might be worth considering. It is very opinionated, in that it encourages you to abandon package.el entirely, which I haven't tried yet, but I have been able to pull in the Emacsmirror package for Bookmark+ successfully. I'm also somewhat interested in the idea of going back to git repositories for tracking Emacs packages. It's the approach I used before package.el came around, and it makes it easy to run your own infrastructure for your own packages. I recently rewrote my Emacs config in a more modular format, and it wouldn't be hard for me to host that repository myself and simply pull it in as an Emacs package like any other. This approach also has the side benefit that it doesn't rely on a single server (Melpa) for all packages.
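For the record, the recipe looks roughly like this; straight.el knows about the Emacsmirror out of the box, so a bare (straight-use-package 'bookmark-plus) may be enough, and the explicit :repo value below is my assumption about where the mirror lives:
;; Pull Bookmark+ from the Emacsmirror, bypassing Melpa entirely.
(straight-use-package
 '(bookmark-plus :type git :host github :repo "emacsmirror/bookmark-plus"))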
raxod502/straight.el: Next-generation, purely functional package manager for the Emacs hacker
I've been increasingly working on serverless systems, and the hardest practice in this list for me to adopt is that a function should do only one thing. Everything I know about software tells me this is true, but I haven't yet formulated a system I like for managing all those functions. They each need (git) version control, along with continuous test and deploy logic, and support for rapid rollback if a deploy goes wrong. Managing all of that from a single repository would require tooling of its own, and using hundreds (or thousands) of git repositories seems daunting. I'm sure this has been solved, but I haven't found the solution yet.
I absolutely adore Thumper. I played it quite a bit on PSVR, and I'm absolutely thrilled it will be on the Go, which is currently my favorite headset simply due to portability and ease of setup.
This terrifies me, to be honest. BetterSlack is (was?) a chrome extension that modifies a web page. There are lots of these, and many of them are very useful. But what Slack is saying in their C&D seems crazy:
In order to remedy this, we ask that you please modify your product so that you are not forcing your own code into our services.
They claim that they have a TOS that forbids this, and I believe them. But do they have any legal basis for trying to stop an author from distributing an original program? After a moment's thought, I doubt they do. But I'm not sure how the courts will see it. Blizzard had quite a bit of luck going after the author of a program that helped users cheat at WoW, but I think they had to demonstrate that the program hurt their business. I can't imagine BetterSlack hurts Slack's business.
I do think the author should change the name, though. Best to not mess with trademarks.
Bookmark+ is a great example of a feature (bookmarking things in Emacs) that seems simple, but admits a huge amount of thought and subtlety. Once the fundamental abstraction is nailed down, a wealth of functionality emerges: bookmarks of URLs work in the same way as bookmarks of files, buffers, and lisp functions. In some ways, it reminds me of abo-abo's Hydra, except instead of proceeding from the abstract (hydras are functions providing an interface from which yet more functions can be dispatched) to the concrete (here's your menu for spell checking), it starts with something concrete (bookmarks), defines a unifying abstraction around it, and then turns everything into that abstraction, much the way Unix treats everything as a file.
I've been really interested in some rules-light tabletop games recently, and my love of cyberpunk makes The Sprawl just so appealing to me.
I haven't even played it yet (though I hope to soon!), but the focus on narrative ('the fiction', as Ardens calls it) is extremely compelling. I never derived great joy from the minutiae of precise maps and exact distances in D&D...but The Sprawl feels Just Right™.
I really like this idea, but it seems like it reduces key handling to a problem of trusting a single entity to handle signing the manifest of keys. I'd love to see GPG used more in organizations, but I'm not really sure how I'd roll this out in my own company, simply because the first question becomes "Who is the person I trust to sign key manifests for this company?". Perhaps I'm not fully understanding how it is implemented.