A little over a year ago, I wrote a bit about the bet over when VR headsets would sell more than 10M units in a year. The original bet, made in 2016, was about whether this would happen in 2019. It clearly did not. While I haven't yet pulled industry-wide numbers, The Verge reported in November 2021 that the Quest 2 alone had sold 10M units since it launched in October 2020, about 13 months prior to The Verge's report. I'd say it's pretty safe to say 2021 was the year VR headsets sold more than 10M units.
We're a few decades into the internet now. One of the most interesting questions to me about the internet is how jurisdictions work. Since the internet is a sort of 'overlay' on the physical world, do borders extend into the overlay, or are they ignored? There isn't a checkpoint or customs like there is when you land in another country and need to present paperwork. So how do we determine what jurisdiction a site can be sued in?
This ruling brings us one step closer to being able to figure it out, and I think it brings us closer to the maybe-naive answer: the jurisdiction is determined by where the servers are. Now, it doesn't go that far, or even really mention servers. But it does seem to conclude that just because a site is visible in a place does not mean the site can be sued there.
Very brief background: a citizen of Texas wants to sue the Huffington Post for libel. But before that can proceed, the court needs to figure out if Texas is the right place for the trial. Eric Goldman's writeup is worth the full read, but in short, the majority finds that simply because a website is accessible to people in a given jurisdiction (in this case, Texas), and even if that website advertises to people in that jurisdiction, it does not mean the website can be sued in that jurisdiction. This approach seems reasonable to me, though it really seems to only apply to sites hosting speech, like blogs or news. There is some language in the majority opinion that suggests if the site had more firm ties to Texas, they might have found differently:
...but its story about Johnson has no ties to Texas. The story does not mention Texas. It recounts a meeting that took place outside Texas, and it used no Texan sources.
But at least in the case of speech otherwise not tied to a particular jurisdiction, I think this ruling makes a ton of sense; I'm not even sure how one would implement the dissenting opinion, which Eric summarizes:
So the dissent apparently is fine with HuffPost being sued anywhere it's geolocating ads, which is likely everywhere. In other words, the dissent would honor the plaintiff's choice of forum. I think the majority reaches the better result.
I've had an interest in distributed computing for a while now, but the user experience is always less streamlined than centralized services. I've never been able to sell peers on the "freedom" aspect or the "no central control" aspect. Because of this, more and more of our everyday lives are taking place over networks that are owned by advertising companies (Google, Facebook, Twitter) whose only real advantage is convenience.
This came to a head this month as the advertising companies decided they didn't want to be a conduit for certain kinds of discussion. This is a completely obvious outcome, and could have been anticipated years ago by anyone with a passing knowledge of software and the business models of these companies. The fact that the U.S. government relies on these companies to communicate with citizens is just wild.
Moderation is a very tough problem for any centralized platform: it's hard to do well at scale because moderation isn't a one-size-fits-all problem. Some kinds of speech are fine in certain contexts, but totally inappropriate in others. Centralized moderation is bad at accounting for this, and tends to produce moderation policies that lack nuance.
Planetary is a piece of software I've never used! But I am on Secure Scuttlebutt (ID @iOOGrbvjXS1YAQWkL/eBy2UOAzUhQGRRG3p5IBFcnLQ=.ed25519) and I love Planetary's mission: make Scuttlebutt more usable for non-technical users. I use a client called Patchwork, along with a Go version of Scuttlebutt called Go-SSB, but whatever client you use, you can interact with the network in very similar ways.
So what makes Secure Scuttlebutt different from pretty much everything else?
If this sort of thing is interesting to you, give it a look!
The bet in question is when VR will sell more than 10 million units in a single year. That's the definition of "take off" in this context. As a guy who was writing "VR is here!" in 2012, it's a little bit wild to me that, at the close of 2020, we're still having this argument about whether VR is successful: I've spent about $1000 over the past 5 years on three different headsets (PSVR - $400, Oculus Go - $200, Oculus Quest 1 - $400) and I've had incredible experiences with all three. Farpoint, Doom VFR, and Skyrim VR on PSVR were incredible experiences, the Oculus Go gave me Darknet, BigScreen, and Virtual Virtual Reality, and the Quest has changed my life during the COVID-19 shutdown: Beat Saber, Pistol Whip, and Synth Riders have become a way to stay active even when I'm spending more time indoors, and games like Walking Dead: Saints and Sinners give me an immersive world to explore with a nice narrative. If VR disappeared off the face of the earth today, I'd call the whole endeavor a success. I've thoroughly enjoyed my time in VR.
So it's interesting to me that in the most technical online forum I frequent, there's this air of doubt about the whole thing. All kinds of arguments are brought out: VR offers nothing new, it's not immersive, most people don't need it, and it's taking too long to have an 'iPhone moment'.
That last one gave me pause. How many technologies ever have an 'iPhone moment'? I'm not sure exactly what that means, but if the bar for "taking off" is that every adult goes out and buys one, I fear most technologies never have an 'iPhone moment'. Here's the thing: the iPhone wasn't revolutionary. It brought a bunch of really new and cool stuff to a device that everyone already owned: the cell phone. So consumers were already on a treadmill of upgrading their cell phone every few years, and when the time was up on their current model, they had an option to get an iPhone. The cost factor was mitigated by a built-in financing system offered by cell carriers (adding monthly charges to an existing bill that cover device cost). It's a good device in many ways, so it sold well. This is a crazy bar to set for most other technologies, especially those that have no immediate precursor.
VR has exactly that issue: no one owns "some other headset" that they'll replace with a VR headset. It's a totally new activity, is often perceived as extremely expensive, and is often associated with a dedicated room and lots of auxiliary equipment like beacons set up around the room. All this has changed now that inside-out tracking has been developed, and costs have dropped 10x in the past few years.
Is there anything else that was like this? It reminded me a little bit of how my family reacted when my dad bought a Kaypro II CP/M machine in 1984. No one knew what it was good for, even though it did a few neat things. So I looked back at the sales of PCs through the 80s in Ars Technica's 2005 retrospective on computing. In 1984, Apple sold about a million units, and it decreased from there, selling only about 350,000 units by 1987. Even the juggernaut, IBM PC clones, sold 6 million units in 1987, growing from 2 million units in 1984. It wasn't until 1988 that all PCs clones combined surpassed 10 million units shipped per year, largely driven by business, and the fact that all the different models were largely compatible with each other (the origin of the term "IBM compatible"!) allowing competition to thrive.
After reading and thinking about that, my takeaways are:
I disagree with how Greg cast this:
So what went wrong? Looking back at VR hype in 2016, there were a lot of reasons to be optimistic...
But I do think there are milestones that VR needs to cross before it can become more mainstream. There need to be multiple cheap standalone models that users can choose from without losing a library of apps they've invested in. Proprietary app stores are now absolutely entrenched (for better or for worse), so I expect the way to approach this is for vendors to incentivize cross-buy. It may be a small hit to store exclusives, but given that we're concerned about the growth of the VR platform as a whole, this may be a case where if vendors incentivize cross-buy, they'll end up getting a slightly smaller portion of a much larger pie.
VR also needs to get higher resolution displays (this is happening) so reading text in VR is natural and easy. This will allow more business uses, particularly in a pandemic-stricken world. We have some apps that do this, but unfortunately most are not yet better than their real-world equivalent in terms of convenience, and cross-platform support is lacking. I use Linux almost entirely, with the exception of a Mac I have for work, and Virtual Desktop, which is what I would normally have bought in a heartbeat, only supports Windows. This makes perfect sense, as it's a single-developer project and Windows has more business use than other platforms, but as long as reading text is a challenge, business use will still be a tough sell.
In the meantime, I'll continue using my headsets daily and enjoying the remarkable experiences they bring.
It's not totally unreasonable for a company to unify login under one account. In the case of Facebook and Oculus, however, I don't think it makes sense from a customer perspective. In particular, I'm very concerned with the "black bag" treatment users get when they are banned from the platform. They are often unable to get any information on why they are banned, there is no appeal, and they lose access to hundreds or thousands of dollars of hardware and software they purchased. I don't think any customer would agree to such a setup if they had any choice, but they really don't: the contracts are not something a customer can negotiate, and the Quest/Quest2 are not products with any real competitor right now. Facebook and Oculus are not alone in this approach, and I think it's just as problematic elsewhere (Google comes to mind).
I was lamenting lately that in 2005, dominant laptop manufacturers made laptops with replaceable hard drives, memory, and batteries, but today, most are sealed and can't be upgraded. This design limits their usable life in a sort of "weakest link" sense: the component with the shortest life determines the life of the device. This bothers me not only because of the cost and waste, but also because if you turn around to buy a replacement, the manufacturer will probably try to sell an "improved" version, even in cases where no improvements were needed.
So when I saw DevTerm, I was intrigued. It's true that it isn't a real laptop replacement, but in the category of "portable utility computer", it comes really close. The batteries are standard 18650 cells, and all storage is replaceable. As far as I can tell, it doesn't even have the option to plug it into the wall: the batteries are charged by removing them and using an external charger. This has the nice side-effect that one could carry a couple of extra sets of batteries to extend its usable time.
I was somewhat saddened that the discussion on HN was dominated by complaints about ergonomics. I feel these concerns are mitigated somewhat by the nature of the device: it's not likely that someone will use this the same way one would a laptop (many hours a day, every day). There was no discussion of the design in terms of longevity: an open source design with replaceable components.
As many have noticed, it is quite expensive and unergonomic. Their selling point is entertainment, nostalgia and cyberpunk feel, not really a useful tool.
I think I disagree, but I'm not really sure, since I'm a sucker for computing nostalgia and cyberpunk aesthetics. I see the flat design as a unique feature that doesn't put a screen between me and whatever is in front of me. I love the idea of a tablet computer, but as an Emacs user, the lack of a keyboard is an ongoing source of frustration. This device remedies that, so I view DevTerm a bit like the tablet computer I always wanted: leave it running Emacs all the time, and use it to code, take notes, ssh into my other machines, all less intrusively than a laptop would be.
The device isn't available yet (the site says 2021), so much of this is my speculation, but I'm intrigued nevertheless.
My old Google WiFi router from TP-LINK ended up triggering some kind of kernel bug in Linux and tanking network performance by causing the NIC to constantly reassociate with the router every few seconds. After discussing with System 76 engineers, I decided to try a new router to perform a controlled test, and I didn't like that the Google router had no web interface. So I researched and picked up the RT2600ac from Synology. It not only resolved the problem, but provided better coverage to our back room than Google WiFi mesh did. It also had more advanced configuration, providing site filtering and safe search on a profile-specific basis. Profiles are essentially groups of devices. It worked well for us, but recently we started having debilitating outages where latencies would spike 400% and packet loss was close to 80%. I spent days troubleshooting, looking at Comcast outage maps, and talking with Comcast support. Their ultimate resolution was for me to buy a new cable modem, citing my current model's age (6 years) as past EOL. After I got off the phone with them, I decided to check the performance coming directly out of the cable modem. Sure enough, it was fantastic, which made the router the prime suspect. After some testing, it turns out that the safe search features incurred this performance penalty at times. I've since disabled them and performance is back to previous levels. Overall, the router is superb, but the safe search features don't seem to be totally nailed down.
One of my favorite features on the Switch is the ability to, at any moment, capture the last 30 seconds of gameplay and save it. It's a bit of a trick because of the performance implications of constantly capturing gameplay at a decent framerate, but Replay Sorcery manages to pull this off using JPEG framecaps stored in a memory-backed ring-buffer. Very glad to have an open source tool for instant replays!
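The core trick can be sketched in a few lines: keep only the most recent N frames in a fixed-size buffer, overwriting the oldest as new ones arrive, so memory use stays bounded no matter how long you play. Here's a minimal Python sketch of the idea (frame capture and JPEG encoding omitted; the `ReplayBuffer` class and its names are illustrative, not Replay Sorcery's actual API):

```python
from collections import deque

class ReplayBuffer:
    """Keep only the most recent frames; memory use stays bounded."""

    def __init__(self, fps, seconds):
        # A deque with maxlen silently discards the oldest entry when full
        self.frames = deque(maxlen=fps * seconds)

    def push(self, jpeg_bytes):
        self.frames.append(jpeg_bytes)

    def save(self):
        # Snapshot the buffer: the last `seconds` of gameplay, oldest first
        return list(self.frames)

# 30 fps for 2 seconds -> at most 60 frames ever retained
buf = ReplayBuffer(fps=30, seconds=2)
for i in range(100):
    buf.push(b"frame-%d" % i)

print(len(buf.save()))  # 60
print(buf.save()[0])    # b'frame-40' (oldest retained frame)
```

The appeal of the ring-buffer design is that "save the last 30 seconds" becomes a cheap snapshot of memory already in hand, rather than an expensive encode started after the fact.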
A few highlights worth mentioning:
There's also some fun stuff!
I'm not sure how much I'll attend live, but I'll definitely watch all the talks I miss from the recorded streams. Exciting!
This is huge! I somehow had totally missed that Solid was this far along, and that Bruce was working on it. I have a good friend who worked at a startup, Singly, which, though now-defunct, was aimed at storing data in a way not so different than Solid. Here's how TechCrunch wrote of it in 2012:
The company began its life as The Locker Project, which would capture data from a user’s online activities (e.g. tweets, photos, checkins, etc.) and archive those items in a storage locker of sorts. Those efforts continue as an open source project, but Singly as it stands today is the commercial result of the problems solved while building The Locker Project. With the commercial launch, the company will offer the “app fabric” for $99/month for up to 1 million users, handling authentication, friend-finding, social sharing, and the like. Meanwhile, pricing is available upon demand for those with more complex data needs (aka the “data fabric,” as they call it), including syncing, storage, filtering, de-duping, intelligent indexing and more.
And here's an excerpt from the Inrupt press release about Solid:
The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things — your computer, your phone, your IoT whatever — is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
Pretty cool development to the issue of data privacy...I hope this takes off!
A couple of days ago, I wrote about my thoughts regarding Apple's trajectory with M1. I didn't mention Big Sur, but that's part of the same pattern: Apple is going to continue to tighten the screws to prevent customers from running software of their choosing, all under the banner of security. This post discusses both, but was prompted by Apple's OCSP servers getting overloaded yesterday, which resulted in everyone finally realizing that Apple computers ask Apple for permission to run a program every time it is launched. The post sums up my thoughts quite well, so I won't reiterate here beyond this summary from the post itself:
The day that Stallman and Doctorow have been warning us about has arrived this week. It’s been a slow and gradual process, but we are finally here. You will receive no further alerts.
I've been looking for a planning tool for a small team for some time. I haven't even tried this, but I just love the fact that a game developer wrote this over 14 months because of a vacuum of planning tools that aren't cloud-oriented enterprise SaaS. It's not open source, but you can build it for your own use and it runs entirely offline, with its datastore in plain text (JSON) so it can be version controlled. This is completely up my alley! If you're interested in a pre-compiled binary, the author has made releases available on itch.io for $15. Very cool!
I was a huge fan of Apple from 2007 to 2011. In 2011, I started to get this vibe from them that they wanted to make iPad and Mac the same. I think it was a subtle change in how application status was reflected in the dock. The upgrade to Lion removed a light in the dock below applications that were "running" rather than simply "pinned". I sometimes like to manage resources myself by killing applications and starting others. Apple's response to criticism on this change was essentially "You don't need to know if an application is running or not." I found this disconcerting, and discovered I fell right back into my old Linux habits without much trouble at all. I essentially eliminated all Apple products from my household because of this decision.
It seems like an irrationally strong reaction to such a small change. And I think it was! My thinking at the time was that Apple was trying to shift the Overton window towards having desktops start to follow a mobile computing paradigm, starting with process management. I see this as a risk because desktops are a last bastion of relatively free computing: open platforms that can be developed for, forked, and improved without paying any fees or having to agree to a Terms of Service or End User License Agreement. But Apple had started to show signs of bringing the App Store to desktops in 2010, though that App Store was distinct from the iOS App Store. I had originally considered its launch Apple's attempt to replicate the success of the App Store on iOS.
But as OS X removed this light from the dock, I saw this trajectory whose end goal is likely to be total vertical integration, a fortress of technology that is completely controlled inside its borders. Some see this as a good trade for potential security benefits. I see it as a poor trade for the freedom it removes. So I left.
This week's announcement of the M1 is another step. M1 machines will run iOS apps natively, even when they have not been customized to do so by the developer. This change, combined with Catalina's "phone home for every executable" and increasingly-arduous hoops to jump through to run non-approved apps suggests Apple's strategy is to make one App Store for all devices, take their 30% cut of all purchases, and remove or allow software at will. Such a system would make computing very sad for me. It would be a big loss.
Naturally, Apple represents just one facet of our computing future, and the scope of that universe is constantly expanding. But their decisions set trends, inspiring followers to propagate them widely. And while I don't mind having Apple around as an option for people, I wouldn't want their computing model to dominate. I think a lot of the future hackers and makers are made by growing up around open technology and playing with it. A world filled with closed devices would really suppress this exploration, I think. I don't think anyone wins in that scenario.
Google Photos is one of Google's most amazing products...I've been a very happy user for years. It's so good, it's just the most obvious choice for any casual photographer. No crazy upsells, great performance, good feature set.
I recently realized that a lot of my favorite photos are backed up to Google and...that's it! Folks get their accounts closed with no explanation and no recourse, though. So I used Google Takeout to try and get a copy of my photos for safe-keeping. The request took two days to fulfill, and resulted in 39 archives, each 2GB, with no reasonable way to automate the download. The system has given me 7 days to download all 39 2GB archives before they are deleted. The process is time-consuming, so I have 1 day left and I still have to download the final 18 archives. Wednesday night project!
But Google is also starting to paywall some features of Photos, so I'm sensing this is the beginning of the end for the greatness of Photos, as new photos will start counting against storage quotas.
Google's effort to monetize photos makes complete sense, and their strategy appears to be to incentivize purchases of Google One, which will include more storage and also unlock more features. I'm now hoping that I can sync photos to my Synology and/or NextCloud accounts instead. It's not so much that I mind paying, it's that Google likes to change the ground rules after you've joined, which means I'm forced to adapt on their schedule. But I like to think of myself as the customer, where Google should be adapting to my needs, not the other way around.
The issue here is that new content distribution mechanisms don't serve all the customers, so customers write tools to solve their problem. But the way copyright law is written, those tools are illegal. Distributing and using them was made criminal by the DMCA. Promoting science and the useful arts isn't just about money, it's also about providing access to the works.
I could watch Leonard discuss this stuff all day...it's just so interesting to me how the law intersects with technology, and how that plays out in practice.
Leonard discusses a number of interesting topics in this video. He reviews relevant parts of the text of the laws themselves, but also dives into the relevant case law, which is often overlooked in discussions with non-legal folks trying to reason about how to interpret the law.
As a programmer, I often want to argue that tools like youtube-dl are legal. I feel like they should be totally legal...streaming content comes and goes for lots of reasons, and being able to timeshift it to watch on a train or airplane feels a lot like the behavior that was determined to be legal in the Betamax decision from 1984. In this mindset, it's tempting to argue that youtube-dl is totally legal, and it's upsetting when folks come along and assert that using it is a crime.
But the sad truth is that the DMCA is really badly written, and criminalizes lots of behavior that many people would think is completely reasonable (like using youtube-dl to download videos from YouTube that are Creative Commons licensed). Understanding the current state of the law is a first step towards figuring out how to improve it.
I have a friend that worked at Google for some years. Over lunch, I mentioned my frustration with how they cancel products, and he mentioned that Google's poor behavior is limited to free products, but once you pay they are better. This story is a nice counter-example, as I expect Nest Secure wasn't free or even particularly cheap.
It's sort of neat that Tomb is a zsh script leveraging dm-crypt under the hood. It's a shame that it is Linux-only, but I use Linux enough for most things that it's a convenient way to carry around encrypted containers on e.g. a USB drive, so if you lose the drive, at least your data isn't compromised.
I worked at Amazon Music when Google Music launched, and I was blown away by how good it was. I thought for sure that Amazon would give up on their music store before Google did. Fast forward 9 years, and I'm listening to Amazon Music while I write this.
In the intervening years, I've learned to avoid Google products as much as possible, both in my personal and my professional life, simply because you can't count on them. Google is like this genius uncle that always has a new contraption that amazes you, but can never show up on time, and when he finally does show up, he can't even remember what cool contraption you're talking about when you ask if you can play with it again. It's neat the first couple of times, and then you get frustrated and move on with your life, focusing instead on projects you can count on.
I ran across this in a discussion on HN of sites that carry high-quality public domain ebooks. I value archival-quality material highly, from webpage snapshots to books to videos and music. Having a site that focuses so much on quality and polish is a great addition to the giant archives that drive volume like Gutenberg and Archive.org.
Fennel is remarkable for a few reasons. I'm a big fan of Phil Hagelberg's work in general, and Fennel is no different. It's a tiny library that allows lisp programming in Lua, and when coupled with LuaJIT, provides a tiny, fast, portable scripting environment.
In particular, I'm impressed by Fennel because it is focused on limiting runtime cost vs. Lua, and permits compilation of Fennel to Lua via --compile, which is fantastic for retaining the slim dependency tree I love about Lua scripts.
After years on Google Reader, and then TT-RSS, and then a basic Python script I threw together, I've spent the last year or so on FreshRSS. It requires a DB, but sqlite works beautifully, making installation trivial. It's been very stable and provides a great "river of news" experience.
RSS in general acts as my "information nervous system", collecting up data from across the net and providing a way for me to filter, read, and share it with external tools. This site is powered by Shaarli, which is a bookmarking tool. What's cool is that FreshRSS can dispatch out to Shaarli directly, so as I'm reading through feeds, I can hit a button to share the link and my thoughts on it. Together, they are enough for a basic social network, since Shaarli sites also produce RSS feeds that can be consumed by others' RSS readers.
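Part of why this works so well is that RSS is trivially machine-readable: a Shaarli feed is just RSS 2.0, and any tool can pull titles and links out of it with a few lines of code. A sketch with the Python standard library (the feed content below is a made-up example, not an actual Shaarli export):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, the same overall shape a Shaarli feed has
feed = """<rss version="2.0"><channel>
  <title>My links</title>
  <item><title>Fennel</title><link>https://fennel-lang.org</link></item>
  <item><title>FreshRSS</title><link>https://freshrss.org</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
items = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
print(items)  # [('Fennel', 'https://fennel-lang.org'), ('FreshRSS', 'https://freshrss.org')]
```

That low barrier to entry is exactly what makes the FreshRSS-plus-Shaarli loop feel like a decentralized social network: every site is both a publisher and a consumer of the same simple format.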
I'd like to see this more broadly adopted! If you're interested in democratizing social networks, it's a very nice approach that works right now.
Shaarli is a great piece of software, but does seem to lack a bit in the variety of themes available for it. After upgrading, I found that the theme I'd installed that was a clone of the del.icio.us theme actually caused the site to 500, so I went looking for an alternative. This theme looks serious! Lots of commits, and based on the screenshots, it's very faithfully sticking to material design.
Sadly, while I've installed it on this instance, I'm still getting errors when I try to enable it, so I'll have to dig in and see if that's something with my instance or if I should look at updating the theme. This instance is using version 0.12.0 of Shaarli, but the latest release for the theme is 0.11.0.
A good friend sent this my way and I enjoyed it immensely. A really cool take on the Turing Test.
I think back to 2005, and wonder how cool it would be if I had a folder full of web page snapshots showing not only what I was interested in back then, but also what the web looked like then. Even though many of the blogs I read back then are likely long gone, if I had an archive that showed the pages the same way they existed, I could still browse through the same stuff today that I thought was useful or interesting back then.
That's what SingleFile does. Under the hood, it uses data URIs to capture page assets and encode them into the text of the page itself, so the resulting file, while as large as the sum of the assets it contains, is a stand-alone artifact that can be opened in any browser without any plugin at all. In my testing so far, it works extremely well.
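The data-URI trick itself is simple enough to show in miniature: base64-encode an asset's bytes and swap the encoded `data:` URI in where the external URL used to be. A toy Python sketch of the idea (not SingleFile's actual implementation; the file name and bytes are made up):

```python
import base64

def inline_asset(html, url, asset_bytes, mime):
    """Replace a reference to an external asset with a data: URI."""
    encoded = base64.b64encode(asset_bytes).decode("ascii")
    data_uri = f"data:{mime};base64,{encoded}"
    return html.replace(url, data_uri)

page = '<img src="logo.png">'
result = inline_asset(page, "logo.png", b"\x89PNG...", "image/png")
print(result)
# The page now carries the asset inline and no longer needs logo.png
```

Do this for every image, stylesheet, and script on a page and you get a single self-contained HTML file, which is precisely what makes the snapshots durable: there are no external resources left to rot.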
There's an automatic archive function I haven't played with. I wonder about the feasibility of indexing the archive files and layering search on top of the archive. Those two features together would be quite potent.
I have an interest in personal information management, and as a heavy user of Emacs, it's hard for me to fairly evaluate software outside of Emacs, simply because I'm so used to everything Emacs provides. I tend to discount other programs because they lack some feature I'm used to in Emacs.
But with Joplin, the synchronization through NextCloud, the good client support for Android, Linux, and MacOS, the end-to-end encryption, and the ability to instantly edit my notes in an external editor ('emacsclient'!) really make it my dream system for notes.
I wish this were the curriculum I learned from. I reason intuitively about most of the math I do daily, so learning symbolically has always been a reach. Better Explained presents excellent intuitive explanations for various concepts that make me feel more like I understand the dynamics of the system I'm working on.
Fabien's explanations are always a joy to read. I'm a huge fan of his "Game Engine Black Book" series, and I had no idea he did smaller analyses like this one, which explores the structure and workings of Andrew Kensler's famous "business card ray tracer".
I remember back in 2007-2010 I gave a lot of thought about how to share data securely between my Mac (I was just getting into Mac back in 2007, around when they switched to Intel) and my "legacy" Linux machines (I ended up dumping Mac in 2011, so keeping Linux in the loop turned out to be a good idea). Inevitably, TrueCrypt came up because there are simply so few cross-platform on-the-fly encryption systems. But after 2014's very strange shutdown of the TrueCrypt project, as well as the strange licensing situation around TrueCrypt, I expected to see a bunch of alternatives emerge (sort of like happened with RSS after Google shut down Google Reader). But it doesn't seem like this happened. Now in 2019, when I search for cross-platform disk encryption, VeraCrypt comes up, which is essentially a continuation of TrueCrypt, and everything else is either barely maintained, only for one platform, or closed source. This surprises me somewhat, but given how much work goes into creating and maintaining advanced security software, perhaps I shouldn't be!
I've only read about 20% of this, but so far I'm extremely impressed at both the quality of the observations (package managers are as much a human problem as they are a technical one) and at how well-structured the ideas are.
Skinning was all the rage back in the '90s. I remember installing LiteStep (which seems to still be around!) on Windows 98 to get a custom look-and-feel. Winamp3 was all about crazy skins, and theming in general was just part of the hobby that was using a computer. Fast forward 20 years, and the newest feature I see in every app and operating system is 'Dark Mode'. I think the change is partly because we became more thoughtful about the security impact that cosmetic changes could have. As with all rules, there are valid cases that get prohibited, and developing a consistent, high-quality dark theme for Firefox is one of them. Luckily, ShadowFox adopts an unorthodox approach that side-steps some of those security measures by distributing itself as a script, rather than a browser addon. This is clever, and the result is quite beautiful to my eye, particularly the consistency across the 'protected' pages like 'about:addons' and 'about:profile'. I'm very happy some folks took the time to work on it!
I was skeptical that Delta Chat could work well. It was nevertheless attractive because of its 'self-hosted' nature, so I installed it on my family's devices and set up some accounts. Onboarding needs to be smoother, but it does work. Account detail sync seems flaky, so my family can't see my profile photo, even though I set one.
That said, Delta Chat is amazing. It uses plain old email over IMAP connections to work its magic, and it uses end-to-end encryption to protect the content of messages. Photos work great, and it also supports audio messages, which is a lot of fun. I highly recommend trying it out.
I really enjoy almost all of Sam Altman's writing, and this is no exception. His insight about compounding yourself is a real challenge, since there aren't that many ways to do it (though he says the opposite!). He lists a few: capital, technology, brand, network effects, and managing people. As a company, I understand this list, but as an individual, I'm limited to technology and managing people (there's an argument that network effects applies as well). Interestingly, these are the two areas I've focused on most in the past 10 years or so, so there's at least some alignment there.
John Carmack has always been an inspiration to me, and he seems to be thinking a fair bit about exactly what technology will lead to that sort of compounding:
I struggle with internal arguments around how much pursuing various new things "compounds" on my current base. I hate to say "not worth my time to learn that", but on the other hand, there is a vast menu of interesting things to learn, and some are worth more than others.
There are some obvious guidelines. One is to focus on technologies that are what I call "prime abstractions": they can seem uselessly abstract, but once you learn them, they have broad applicability across both discipline and time. These technologies are often the least attractive, especially to newcomers, since they tend to not lead to immediate results. Buckling down and learning procedural, object-oriented, and functional programming idioms is like this. Even learning SQL well has some of these elements. I was lucky that I picked up Linux back in 1997 as my primary computing environment: it has endured beyond my greatest expectations, and has had a huge compounding effect in my work for the past 20 years.
But then there's the harder question: given what I currently know, what should I learn next to either compound my effectiveness, or set myself up to learn the next thing that will compound my effectiveness? For me, it's been difficult to figure that out in the realm of technology: there's a ton of stuff to get interested in, and it is sometimes hard to discern which will be most useful when coupled with my existing knowledge. But, as an engineer focused on technology for so many years, I think it was a slam dunk for me to start focusing on effective methods of working with others. Not just management (which is a world unto itself), but working with peers, managers, executives, contractors, and other teams as well. I have many more years to continue improving in those areas, but I would like to start using some time to focus again on technology as well.
Seems like Chromium is shifting to remove an API, webRequest, in favor of a more limited version, declarativeNetRequest. In doing so, it is implicitly favoring the API of a more limited, commercial net-blocking utility over its more open-source, non-commercial rivals. Gorhill summarized the top-level effect of the change really well here, I think:
Extensions act on behalf of users, they add capabilities to a user agent, and deprecating the blocking ability of the webRequest API will essentially decrease the level of user agency in Chromium, to the benefit of web sites which obviously would be happy to have the last word in what resources their pages can fetch/execute/render.
I hear very little discussion about what a user agent really is, and about the right of users to manage, shape and ultimately control what code is downloaded and run on a computer that they have purchased. As Cory Doctorow concluded in perhaps my favorite of his many excellent works, Lockdown: The coming war on general-purpose computing:
Freedom in the future will require us to have the capacity to monitor our devices and set meaningful policies for them; to examine and terminate the software processes that runs on them; and to maintain them as honest servants to our will, not as traitors and spies working for criminals, thugs, and control freaks.
My main gripe with mainstream devices is that they cater to short term goals: shiny, cool features that will be old and boring in 12 months, so you can throw away the device and buy a new one. The Pyra, on the other hand, is a device built to optimize for users, rather than the business. It has a high price, but it's made almost entirely in the EU, and it's modular, so components like the screen and port configuration can be swapped out while leaving the CPU and RAM intact, for example. It trades sleekness for power, choosing to run plain ol' Debian and offering a slew of USB, HDMI, and SD ports for expansion and customization.
For me, I expect it would be the perfect device to throw in a backpack that mostly can be used for gaming, but also can browse the web and even support lightweight programming. Best of all, it can run Emacs, so I'd never be without a comfy computing environment!
Intercooler builds on jQuery and creates a declarative language for front-end code, one that encourages API designs which retain simplicity while still providing a fast, modern UX. Check it out for your next project!
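To give a flavor of that declarative style, here's a tiny sketch; ic-post-to and ic-target are intercooler attributes, but the endpoint, ids, and surrounding markup are hypothetical:

```html
<!-- Clicking the button POSTs to the (hypothetical) endpoint and swaps
     the server's HTML response into #item-list, with no hand-written
     JavaScript on the page. -->
<button ic-post-to="/items/new" ic-target="#item-list">Add item</button>
<div id="item-list"></div>
```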
There are a couple of cool tweaks that make it work well with Emacs, which has the superb markdown-mode by Jason Blevins.
First, even though the file names for Markdeep should be .md.html, Emacs works best if editing is done in markdown-mode. This can be accomplished with a simple header:
<!-- -*- mode: markdown; -*- -->
Second, you don't need to run markdown on the file to view it, but rather simply open it in a browser. It's easy to bring over html-mode's C-c C-v binding to markdown-mode:
(define-key markdown-mode-map (kbd "C-c C-v") 'browse-url-of-buffer)
This allows a nice workflow of editing and previewing with no further adjustments, I've found.
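If you edit many Markdeep files, an auto-mode-alist entry avoids needing the mode header in each one; a sketch, matching the .md.html naming mentioned above:

```elisp
;; Open any file ending in .md.html in markdown-mode, so Markdeep
;; files don't each need a per-file mode header.
(add-to-list 'auto-mode-alist '("\\.md\\.html\\'" . markdown-mode))
```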
What I've neglected to mention in previous discussions is that by putting rendering into the client, tools like Remark supply the end-user with the 'source code' for the presentation as a side-effect of allowing them to view the presentation, which makes the presentation a self-contained artifact. I think this approach has more merit than I originally gave it credit for, so I plan to give Remark a try in some upcoming decks and see how I think it compares.
This decision is the correct one, and it's good that it is being formally recognized as such. I am somewhat surprised it took this long, however. Copyright has little place preventing folks from using, maintaining, and repairing devices they legally purchased, regardless of whether the manufacturer is trying to prevent them from doing so. Cory Doctorow makes an excellent related point, I think:
They have this pretense that DRM is ‘effective’ but then they grant a ‘use exemption’ and assume that people will be able to bypass DRM to make the exempted uses, because they know DRM is a farce...The thing is that there's these two contradictory pretenses: 1. that DRM is an effective means of technical control, even in the absence of legal penalties for breaking it, and; 2. That once you remove the legal stricture on breaking DRM, it will not be hard to accomplish this.
I really like these guidelines, and I like that they encourage people to be better even if they aren't breaking any rules. RMS is thoughtful, as usual.
I've struggled for years with using cool packages like rspec-mode because I use tools like rbenv to manage Ruby versions. The problem is that rbenv relies heavily on shell customizations to work, and when Emacs spawns a subprocess, bash is run as neither a login shell nor as an interactive shell. If you carefully read the section of the bash manual about invocation, you'll find that neither .bashrc nor .bash_profile are sourced for shells that are neither interactive nor login. Sure, you can tell Emacs to use -i when it invokes bash, but this causes other functions that rely on rapid subprocess creation to behave strangely, or at least be quite a bit slower.
But I kept reading the manual, and it turns out the solution lies in the BASH_ENV environment variable, which, if set, points to a file that bash will happily source whenever it is invoked non-interactively. By creating a file that adds $HOME/.rbenv/shims to my PATH and calling (setenv "BASH_ENV" "/home/rpdillon/rbenvsetup.sh") in my initialization, I can now invoke rspec from within Emacs on demand, exactly as intended. And I also got to learn about a corner of bash I'd otherwise never have explored.
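For concreteness, a minimal sketch of what rbenvsetup.sh might contain; the post doesn't show the actual file, so this is an assumption:

```shell
# rbenvsetup.sh -- a guess at the file pointed to by BASH_ENV.
# bash sources this on startup for non-interactive, non-login shells,
# such as the subprocesses Emacs spawns, so rbenv's shims end up on
# PATH even though .bashrc and .bash_profile are never read.
export PATH="$HOME/.rbenv/shims:$PATH"
```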
Carmack's talks continue to be the best possible use of 90 (or 120, or 180) minutes for me. His crisp, honest, well-grounded and well-articulated discussion around real-world work to push the technology envelope makes him worth listening to any time I get the chance.
It's super interesting to hear him discuss how computationally demanding VR is. The Oculus Quest was announced, a standalone 6DOF headset that is computationally roughly equivalent to previous-gen consoles like the PS3 and Xbox 360. But, as he points out, those consoles were rendering at 30 FPS at 1280x720, often with no MSAA. To support VR, the system needs to render more pixels faster: 1280x1280 times 2 (one image for each eye) at 72Hz. This means roughly 8x the number of pixels being pushed, with an additional tax for 4xMSAA on top.
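The arithmetic behind that 8x figure, using the numbers quoted above:

```python
# Pixel throughput (pixels per second) for the console-era baseline
# versus the Quest's stereo rendering, using the figures quoted above.
console_pixels = 1280 * 720 * 30        # 1280x720 at 30 FPS
quest_pixels = 1280 * 1280 * 2 * 72     # 1280x1280 per eye, both eyes, 72 Hz
ratio = quest_pixels / console_pixels
print(round(ratio, 1))                  # roughly 8.5x, before the 4xMSAA tax
```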
I've been following Josh Parnell's work on Limit Theory since shortly after it was kickstarted. His early videos were simply breathtaking, and I thought he had a really good shot at creating the next generation of procedurally generated space simulations. I don't know enough about everything that's happened since 2012 to have any meaningful takeaways in this particular case, but I have the utmost respect for Josh. He's clearly driven by a serious work ethic boosted by a lot of passion and talent, and I'm sure he will come away from 6 years of work on Limit Theory having a better understanding about how to apply his energies in his next projects to maximize the chance of success. There was a lot about Limit Theory that seemed to be done right. If I had to critique it in terms of things I noticed that gave me pause, I would say the lack of a publicly available playable demo (release early, release often) makes the list.
This made HN today (via Troy Hunt's post). I have to say, I've been happy with it for years in my home, since not everyone in my family cares about their browsing speed or privacy as much as I do. Even visitors in my home get the benefits of it without having to configure a thing.
CrossCode is an action RPG written in ImpactJS that has superb performance even on modest hardware, is cross-platform, and is available DRM-free from GOG. I've been playing early versions of it over the past couple of years and have had a ton of fun. It's got a bunch of great artwork, interesting characters, a novel world, and satisfyingly challenging gameplay that blends puzzles and action beautifully. Its 1.0 release is today, and I'm very much looking forward to starting anew with Lea (the protagonist) to see how the final version of the world improves upon the version I know from 2017.
andOTP is my favorite 2FA app for Android. It is a security product, so being open source is a huge bonus. In addition, it integrates with OpenKeychain to sign and encrypt OTP backups, so that when you get a new device, you don't have to reset every single 2FA key. Some proprietary apps support this via cloud sync, but then your 2FA codes are sitting on infrastructure you don't control, so I think andOTP's approach is better. On newer versions of Android, it integrates with the fingerprint reader to authenticate on startup, which is a nice touch as well.
I'm wearing this shirt right now that says "The content of this shirt is no longer available due to a copyright claim." I bought it as a silly reminder that the efforts to strengthen copyright restrictions are deeply misguided; they always have unintended, negative side effects, and do little to achieve their stated aim. The internet is good at spreading ideas, and folks don't seem to cope well with that. This means that, as long as we keep investing in the internet, copyright will continue to weaken. It means that advertising will be more powerful. It means that people will spread misinformation, hoping that others will be misguided by it. Our ability to progress as a society will be governed by our ability to harness these trends for good, rather than fight them. If you find yourself trying to legislate away technological progress, you're probably on the wrong side of history.
package.el with Melpa has been my go-to for years, and I didn't much care when they expelled all EmacsWiki packages from Melpa for security reasons. That did leave all of Drew Adams' work inaccessible to me, though. After trying Quelpa and running into bugs relating to errors during temporary tarball generation, I decided that straight.el might be worth considering. It is very opinionated, in that it encourages you to abandon package.el entirely, which I haven't tried yet, but I have been able to pull in the Emacsmirror package for bookmark+ successfully. I'm also somewhat interested in the idea of going back to git repositories for tracking Emacs packages. It's the approach I used before package.el came around, and it makes it easy to run your own infrastructure for your own packages. I recently rewrote my Emacs config in a more modular format, and it wouldn't be hard for me to host that repository myself and simply pull it in as an Emacs package like any other. This approach also had the side benefit that it didn't rely on a single server (Melpa) for all packages.
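As a sketch of what pulling bookmark+ from the Emacsmirror looks like with straight.el (the exact recipe here is my assumption, not taken from the post):

```elisp
;; Fetch and build bookmark+ from its Emacsmirror git repository.
;; Assumes straight.el has already been bootstrapped in the init file.
(straight-use-package
 '(bookmark-plus :type git :host github :repo "emacsmirror/bookmark-plus"))
```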
I've been increasingly working on serverless systems, and the hardest practice for me to switch to in this list is that a function should do only one thing. Everything I know about software tells me this is true, but I haven't yet formulated a system I like for managing all those functions. They each need (git) version control, along with continuous test and deploy logic, and support for rapid rollback if a deploy goes wrong. Managing that all from a single repository will require a system for managing it, and using hundreds (or thousands) of git repositories seems daunting. I'm sure this has been solved, but I haven't found it yet.
I absolutely adore Thumper. I played it quite a bit on PSVR, and I'm absolutely thrilled it will be on the Go, which is currently my favorite headset simply due to portability and ease of setup.
This terrifies me, to be honest. BetterSlack is (was?) a Chrome extension that modifies a web page. There are lots of these, and many of them are very useful. But what Slack is saying in their C&D seems crazy:
In order to remedy this, we ask that you please modify your product so that you are not forcing your own code into our services.
They claim that they have a TOS that forbids this, and I believe them. But do they have any legal basis for trying to shut down an author from distributing an original program? After a moment's thought, I doubt they do. But I'm not sure how the courts will see it. Blizzard had quite a bit of luck going after the author of a program that helped users cheat at WoW, but I think they had to demonstrate that the program hurt their business. I can't imagine BetterSlack hurts Slack's business.
I do think the author should change the name, though. Best to not mess with trademarks.
Bookmark+ is a great example of a feature (bookmarking things in Emacs) that seems simple, but admits a huge amount of thought and subtlety. Once the fundamental abstraction is nailed down, a huge amount of functionality emerges: bookmarks of URLs working in the same way as bookmarks of files, buffers, and lisp functions. In some ways, it reminds me of abo-abo's Hydra, except instead of proceeding from the abstract (hydras are functions providing an interface from which yet more functions can be dispatched) to the concrete (here's your menu for spell checking), it starts with something concrete (bookmarks), defines a unifying abstraction around it, and then turns everything into that abstraction, much the way Unix treats everything as a file.
I've been really interested in some rules-light table top games recently, and my love for cyberpunk makes The Sprawl just so appealing to me.
I haven't even played it yet (though I hope to soon!), but the focus on narrative ('the fiction' as Ardens calls it) is extremely compelling. I never derived great joy from the minutia of precise maps and exact distances in D&D...but The Sprawl feels Just Right™.
I really like this idea, but it seems like it reduces key handling to a problem of trusting a single entity to handle signing the manifest of keys. I'd love to see GPG used more in organizations, but I'm not really sure how I'd roll this out in my own company, simply because the first question becomes "Who is the person I trust to sign key manifests for this company?". Perhaps I'm not fully understanding how it is implemented.