Bashing React

I’ve been a long-time casual detractor of React.js. The occasional snide remark here & there, or long rant on the internet that nobody ever reads.

The gist of my complaint is that it’s overly complicated — while at the same time not doing enough to help developers.

Other frameworks (like Angular) are complex, but they aim to provide a more complete solution in a coherent form. To be effective with React (read: to develop something beyond a toy app), you need a whole lot of other things that you have to piece together yourself: Redux, React Router, etc. Meanwhile, other frameworks (like Svelte) both radically simplify development and improve performance, but don’t have the pretension of solving complex developer problems.

Recently, I decided to get professional about bashing React. Not more professional in my behavior, but I wanted to go pro as an anti-React troll.

I decided I needed to learn more about it. To, you know, criticise it more effectively.

So I turned to ChatGPT to start asking clarifying questions. When I diss something, I like to really get my facts straight — usually after flying off the handle and getting them all wrong.

Do I ever publicly apologise or issue a retraction / correction?

HAHAHA.

I used to joke that React feels like it was written by AI.

See, even a couple years ago, when we (normal people) talked about AI, we meant machine learning that analyzed uncategorized data, and neural networks that pieced together a heuristic for determining if something is, in fact, a picture of a kitten.

(Apparently AI couldn’t figure out how to identify street lights, or crosswalks, or busses — and outsourced the job to us.)

Anyway, the algorithms developed by AI are often surprising and usually inscrutable. But they worked (for the most part), with a few surprisingly strange exceptions. Which is a good reason to use AI to inform decisions, but not to make them.

What I was saying is that React feels like it was developed organically, without a plan, by a large group of people working independently, and it coalesced into something that may be useful for developing web applications, but is bizarrely confusing to someone looking at it from the outside, and doesn’t have a coherent organizing structure. In other words, it was a cobbled-together complex system that came out of the unique boiler-room of Facebook’s needs and evolved with additional layers of complexity as it was adopted and adapted by a broader community.

Compare that to something like Angular, which has a clear, logical design (even if it was designed by committee and is too rigid and too heavyweight for most projects), and it’s largely a matter of taste (and need for complexity and structure) that determines whether you want to use it.

I recently watched the React.js documentary.

And was going to say that I feel like I was right, except that React came out of the mind of one person, Jordan Walke. Which is not quite true, because it didn’t take off until a core team of collaborators and contributors and evangelists pitched in. So it wasn’t really a gradual organic design from a large organization, but was a more deliberate design led by a single visionary to replace the organic jumble of tools used by that large organization, Facebook.

In a rare fit of humility, I decided (before watching the documentary) that there must be a reason so many thousands of developers find React so compelling, beyond the fact that it’s promoted by Facebook.

And coincidentally, I was experimenting with building a website out of components using Svelte.js, and feeling quite productive, despite not doing any real development, other than building the (mostly static) site out of composable components.

The header contains the logo, menu, and login components.
The menu takes the topics (About, Blog, Contact, etc) as data properties.
The login component has two states (logged in, not logged in) and affects the menu (whether admin menu is displayed).
Each page contains the header, main, and footer sections and so forth.
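The structure above can be sketched as plain functions that return HTML strings. This is a framework-agnostic illustration, not actual Svelte or React code, and all the names (Menu, Login, Header, Page, topics, loggedIn) are invented for the example:

```javascript
// Each "component" is a plain function returning an HTML string.
function Menu({ topics, showAdmin }) {
  // the menu takes the topics as data properties
  const items = showAdmin ? topics.concat(["Admin"]) : topics;
  return "<nav>" + items.map(t => `<a href="/${t.toLowerCase()}">${t}</a>`).join("") + "</nav>";
}

function Login({ loggedIn }) {
  // two states: logged in, not logged in
  return loggedIn ? "<span>Logout</span>" : "<span>Login</span>";
}

function Header({ topics, loggedIn }) {
  // the login state affects the menu (whether the admin menu is displayed)
  return "<header>" + Menu({ topics, showAdmin: loggedIn }) + Login({ loggedIn }) + "</header>";
}

function Page({ topics, loggedIn, content }) {
  // each page contains the header, main, and footer sections
  return Header({ topics, loggedIn }) + "<main>" + content + "</main><footer></footer>";
}

console.log(Page({ topics: ["About", "Blog", "Contact"], loggedIn: false, content: "<p>Hello</p>" }));
```

Even in this toy form, the components compose cleanly and the data flows in one direction, from page to header to menu.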

Being able to compose a site from components really helps with structuring your code. But I still didn’t understand all the additional complexity.

Being happy with my progress using Svelte, I thought it would be worthwhile trying React for this simple scenario. And possibly even try to learn a bit about Next.js for server-side rendering a static site. The main case I want is to be able to pull content from other sources (CMS, database, or git repo) and render it into a static site, but be able to update pages and sections of pages by editing plain text (or markdown) files. In other words — like a blog.

Perhaps someday, someone will invent a useful tool for this.

So I started exploring React as a way to structure HTML. I’ve long suspected that the majority of React fans (and users) use it only for this. The claim that you can use React without JSX was disingenuous, because JSX is the primary reason for using React.
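To see why "React without JSX" is a hard sell: JSX like `<a href="/blog">Blog</a>` compiles down to nested createElement calls. The toy createElement below only mimics the shape of the element objects involved, as a simplified illustration rather than React's actual implementation:

```javascript
// Simplified stand-in for React.createElement: returns a plain element object.
// Not React's actual code; just illustrates what JSX compiles into.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Without JSX, even a small menu becomes hard to read:
const menu = createElement(
  "ul", { className: "menu" },
  createElement("li", null, createElement("a", { href: "/about" }, "About")),
  createElement("li", null, createElement("a", { href: "/blog" }, "Blog"))
);

console.log(menu.children.length); // 2 list items
```

The JSX form of the same menu is a few readable lines of angle brackets; the function-call form above is what you'd write by hand without it.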

People say that React is easy to learn because it has a simple interface, but that’s nonsense. React has the most complex interface of any UI framework. Things like componentDidUpdate and componentWillUnmount only make sense at the lowest level, for the most complex interactions — and they expose details that almost nobody knew existed in JavaScript at all. I submit that it was actually the complex low-level hooks (that React had, but nothing else did) that made it popular for complex sites like Facebook, where lots of things are happening below the surface that users aren’t even aware of.

And people talk about the Virtual DOM and how it improves performance. That may be true for extremely complex DOM interactions (hundreds or thousands of elements changing), but for most sites and applications, it doesn’t. In fact, for most cases, the virtual DOM is slower than just finding and replacing real DOM nodes in place. React’s DOM diffing strategy is actually very basic, and doesn’t (or didn’t) handle things like a <div> tag changing to a <span> tag while everything underneath it stays the same. Or (earlier) adding or removing elements from a list.
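A naive sketch of that diffing behavior is below. This is my own simplified illustration of the general reconciliation idea, not React's actual algorithm: when an element's type changes, the whole subtree is torn down and rebuilt, even if everything underneath is identical.

```javascript
// Toy reconciliation sketch (illustrative only, not React's implementation).
// Nodes are { type, children } objects.
function diff(oldNode, newNode) {
  if (oldNode === undefined) return { op: "create", node: newNode };
  if (newNode === undefined) return { op: "remove" };
  // Different type (e.g. <div> -> <span>): throw away the whole subtree.
  if (oldNode.type !== newNode.type) return { op: "replace-subtree", node: newNode };
  // Same type: keep the node and recurse into children by index,
  // which is why inserting into the middle of a list fares badly.
  const childOps = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    childOps.push(diff(oldNode.children[i], newNode.children[i]));
  }
  return { op: "update", children: childOps };
}

const before = { type: "div", children: [{ type: "p", children: [] }] };
const after  = { type: "span", children: [{ type: "p", children: [] }] };
console.log(diff(before, after).op); // "replace-subtree": the identical <p> is rebuilt too
```

Keying children (as React later required for lists) is one workaround for the index-based recursion shown here.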

While the concept of a virtual DOM was a technical marvel, the implementation left a lot to be desired — and came at a big cost, both cognitively and in code complexity, as well as in framework size and performance. “Lightweight” frameworks like Preact were created to replace the supposedly lightweight React.

Which gets to the real complexity. React is pretty complex. But its benefit comes from not needing to understand the complexity hidden inside — until you need it. Most frameworks can’t compete, not in simplicity, but in being able to adapt to the real complexity when needed.

React’s “simplicity” comes at the additional cost of React not having an answer to state management, routing, server-side data, and a bunch of other things that are needed for creating a real web “application”. These are all either “extras” or “roll your own”. And you bring in things like Redux, React Router, and Apollo GraphQL to handle them. Which then makes you wonder why you’re using React at all in the first place.

Most people are happy using JSX to compose UIs, occasionally dip into properties for passing data, and gradually let the React “expert” handle the complexity of state, lifecycle hooks, and the rest. And that’s probably the Facebook use case. Someone is working on the HTML and CSS for a small widget (like advertisements, or like buttons) and needs to make sure it works consistently across all sorts of displays and devices, while someone else handles the lifecycle and state interaction.

That means using React, by itself, to compose UIs, is a perfectly acceptable solution.

Except JSX isn’t a good solution. It can’t handle looping, because it can’t handle statements, so you need to use map() to iterate over arrays, and inspecting arrays of objects that contain arrays can become a nightmare in React, which is actually pretty easy with a straightforward templating solution.
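The map() pattern looks like this. It's shown with plain template strings rather than JSX so it runs without a transpiler, and the sample data (posts with nested tags) is made up, but the shape is the same as what you end up writing inside JSX braces:

```javascript
// JSX only accepts expressions, not statements. A for-loop is a statement:
//   for (const t of topics) { <li>{t}</li> }   // not valid inside JSX
// So iteration is done with a map() expression instead, and nesting data
// means nesting map() calls:
const posts = [
  { title: "Bashing React", tags: ["react", "rant"] },
  { title: "QC OTA API", tags: ["qc", "testing"] },
];

const html = posts.map(post =>
  `<article><h2>${post.title}</h2><ul>` +
  post.tags.map(tag => `<li>${tag}</li>`).join("") +
  `</ul></article>`
).join("");

console.log(html.includes("<li>rant</li>")); // true
```

Two levels of map() is already noisy; three or four (arrays of objects that contain arrays) is where a plain templating language with real loop syntax pulls ahead.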

JSX was a nightmare to use for the first few years, because it meant setting up Babel transpilers, Grunt/Gulp tasks, Webpack, and configuration. Thankfully, create-react-app (created by Dan Abramov) made this simpler, and now tools (like VS Code) can detect JSX and render it properly. Debugging was another major challenge that has improved.

But, I didn’t come here to criticize React, even though I’ve been doing that for a long time. I came here to praise it. Or rather, to explain that it may have its uses, and I want to better understand them. And to understand why the decisions that make React work the way it does were made. For that, I need to understand them better myself.

Earlier, I talked about how React felt like it was inscrutably designed by an AI system.

Part of the appeal of React (and many complex modern technologies) is their complexity. It’s that they are hard to learn, and it is a puzzle to figure them out, and you feel like an initiate into a secret society once you finally have obtained your hard-won knowledge. And people want to be like the esoteric experts, so they adopt their fashions.

What I’ve found is that, while I had been unable to understand React by going through tutorials and looking at sample code, part of that was failing to realize that you don’t need to understand it all.

Most uses of React don’t need Redux, and most well-designed apps won’t use React Router. And the less shared state an application has, the better (usually). So a lot of the complexity associated with React applications (that is outside of React itself) is actually unnecessary for the majority of use cases (rendering content and layout on the screen).

What really clicked for me (after spending some time doing due diligence and giving React a fair shake) came after reading this post by Dan Abramov (the creator of Redux), a core React team member:

https://medium.com/@dan_abramov/you-might-not-need-redux-be46360cf367

Perhaps ironically (and perhaps not — depending on your opinion of Alanis Morissette or high school English teachers), the thing that really helped me to understand React was that it was the AI system ChatGPT (which really just synthesizes existing content) that enabled me to understand (and appreciate) React better.

Maybe only an AI can explain a framework allegedly created by AI.

I’m going to be going through my interaction with ChatGPT as I learn, research, experiment, complain about, and use React. Look for more posts and videos. And if you’d like to study React with me, reach out.

Updating test results in QC using the QC OTA API explained

Yesterday I cleaned up and posted my example QCIntegration utility on GitHub.

While it works as a standalone tool, some people might not want to wade through the code to understand or modify it. So today, I’m going to try to explain how the OTA API works by recreating the steps as a simple script, with explanations, in this blog post.

I’ll start with an example using C# and then give an equivalent Python example. I’ll use the same scenario, updating test case results in QC, but if requested, I can also show how to get test steps from a test plan, or read & update defects in QC using the OTA library.

First, create a new project in Visual Studio (or SharpDevelop). You’ll need to add the OTAClient.dll as a reference. It is a COM library whose main entry point is the TDConnection interface.

When searching for the library name, it is called the “OTA COM Type Library”. The package is “TDApiOle80”. Since it is a COM library, it needs an interop for C#, but this is handled automatically by the IDE.

using TDAPIOLELib;
TDConnection tdConn = new TDConnection();

Now, let’s create a connection to your Quality Center server. You’ll need to know the URL of your QC Server and have valid login credentials with access to an existing Domain and Project.

Assuming you have Quality Center installed on your local machine (not a typical setup), you might have the following settings:

string qcUrl = "http://localhost:8080/qcbin";
string qcDomain = "oneshore";
string qcProject = "qa-site";
string qcLoginName = "aaron";
string qcPassword = "secret";

Note: I do not use this same password for my bank account

There are several ways to log in, but I’ll use the simplest here:

tdConn.InitConnectionEx(qcUrl);
tdConn.ConnectProjectEx(qcDomain, qcProject, qcLoginName, qcPassword);

Now you need to find the test sets that need to be updated. I typically use a folder structure that goes something like:

Project – Iteration – Component – Feature

It’s a bit convoluted but here’s the code to get a testSet:

string testFolder = @"Root\QASite\Sprint5\Dashboard\Recent Updates";
string testSetName = "Recent Updates - New Defects Logged";

TestSetFactory tsFactory = (TestSetFactory)tdConn.TestSetFactory;
TestSetTreeManager tsTreeMgr = (TestSetTreeManager)tdConn.TestSetTreeManager;
TestSetFolder tsFolder = (TestSetFolder)tsTreeMgr.get_NodeByPath(testFolder);
List tsList = tsFolder.FindTestSets(testSetName, false, null);

The parameters for FindTestSets are a pattern to match, whether to match case, and a filter. Since I’m looking for a specific test set, I don’t bother with the other two parameters.
You could easily get a list of all test sets that haven’t been executed involving the recent updates feature by substituting this line:

List tsList = tsFolder.FindTestSets("recent updates", true, "status=No Run");

Now we want to loop through the test set and build a collection of tests to update. Note that we might have more than one test set in the folder and one or more subfolders as well:

foreach (TestSet testSet in tsList)
{
	TestSetFolder currentFolder = (TestSetFolder)testSet.TestSetFolder;
	TSTestFactory tsTestFactory = (TSTestFactory)testSet.TSTestFactory;
	List tsTestList = tsTestFactory.NewList("");

And finally, update each test case status:

    foreach (TSTest tsTest in tsTestList)
    {
        Run lastRun = (Run)tsTest.LastRun;

        // only add a run if the test has never been run
        // (don't update a test that may have been modified by someone else)
        if (lastRun == null)
        {
            RunFactory runFactory = (RunFactory)tsTest.RunFactory;
            String date = DateTime.Now.ToString("yyyyMMddHHmmss");
            Run run = (Run)runFactory.AddItem("Run" + date);
            run.Status = "Pass";
            run.Post();
        }
    } // end loop of test cases

} // end outer loop of test sets

Of course you might want to add your actual test results. If you have a dictionary of test names and statuses, you can simply do this:

Dictionary<string, string> testResults = new Dictionary<string, string>();
testResults.Add("New defects in Recent Updates are red", "Pass");
testResults.Add("Resolved defects in Recent Updates are green", "Pass");
testResults.Add("Reopened defects in Recent Updates are bold", "Fail");

if (testResults.ContainsKey(tsTest.TestName))
{
    string status = testResults[tsTest.TestName];
    recordTestResult(tsTest, status);
}

That’s all for now. I’ll translate the example into Python tomorrow, but you’ll see it’s really quite straightforward.

The trouble with Blogs, Wikis, and Forums

There are a lot of great tools out there for blogging, wikis, and forums. Some of them even look nice and are (somewhat) friendly to use. I like WordPress, I like Blogger, I like phpBB (except for the appearance), punBB, and other forum tools. I like wikis quite a lot. I’ve tried a lot of them recently.

But they’re all standalone. I got a googlewhack when I typed in “embeddable wiki” (not really, but practically the same.) It was a forum where someone was asking if there was an embeddable wiki. My definitive answer is “I guess not.”

That’s shocking. Does no one see the value in being able to create areas of content that are easily editable but are not the whole application?

Similarly, blogs and forums have this same problem. The prevailing philosophy tends to be to make your blog, wiki, or forum your whole site (you can usually make a theme for it) or build your site using an overarching product.

I was actually shocked that CMS apps don’t seem to have the idea of wiki, blog, forum, (and article) at the core.

The one exception is TikiWiki, which is quite cool, except for the code, presentation, and admin interface. It seems to have the right idea of what users want, but I couldn’t get past their admin interface, and didn’t really want to learn yet another templating and component mechanism (which by the way, seems to be done through their admin interface.) And it stores everything in the database! The clincher was that it’s hard to do pretty URLs (their rewrite setup is better than nothing, but not actually pretty) and the wiki editor isn’t actually that nice.

I may still end up using it for One Shore, but I really want the best wiki available, because I spend a lot of time in it.

Pivot seems interesting, in that it has a framework for file-based blogging. That’s WordPress’s weakness. I want versioned files, not DB BLOBs.

Pluggable authentication is sometimes doable, but it’s always on some forum where a guy cut and pasted some code to get wiki X to use the same password database as blog Y or forum Z.

Like I said, the good ones are themeable (to a degree) and they have some of their own components, but what about my MVC or CMS components? And how do I reference a blog post in a CMS?

Theming a blog is usually nicer than theming a CMS, but that might be because it does less. Theming forums is usually pretty limited. And theming a wiki is often a hack.

Most blogs (and some wikis) have comments, but not as fancy as forums. That’s okay; I think comments should be fairly simple.

Blogs, wikis, and forums all have in common that they’re web-editable blocks of text (preferably with limited markup). That means they’re files in my book. That also means that they should be components.

I want to include this valuable content on my site (not as my site). Something as simple as:

blogcomponent.display_post(blog_id, post_id, display_comments=no);

or

blogcomponent.standard_blog_layout(blog_id) or blogcomponent.rss_layout(current_month);
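The embeddable component API sketched above might look something like this. Every name here (BlogComponent, displayPost, the posts store) is hypothetical, invented to illustrate the idea of content as a component rather than a whole site:

```javascript
// Hypothetical embeddable blog component: render one post into any page.
class BlogComponent {
  constructor(posts) {
    this.posts = posts; // e.g. loaded from versioned files, not DB BLOBs
  }

  displayPost(postId, { displayComments = false } = {}) {
    const post = this.posts[postId];
    if (!post) return "<p>Post not found</p>";
    let html = `<article><h2>${post.title}</h2>${post.body}</article>`;
    if (displayComments) {
      html += `<section class="comments">${post.comments.join("")}</section>`;
    }
    return html;
  }
}

const blog = new BlogComponent({
  "20080303-1": { title: "You should see my wiki", body: "<p>...</p>", comments: [] },
});

// Embed a single post inside a larger page, instead of the blog being the site:
console.log(blog.displayPost("20080303-1", { displayComments: false }));
```

The point isn't this particular class; it's that the blog is a callable component the surrounding site composes, with comments, layout, and theming decided by the caller.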

And files that can cross reference each other like this blog post:

[@title = You should see my wiki]
[@date = 20080303 094700 EST]
[@revision = 1.0]

Today I added a bunch of details about [this great tool | wiki:SuperFramework] in the wiki.

[Comments | forum:myblog:200803003-1_comments]

[@display_comments=no]

Of course, display comments should be in a configuration somewhere, and the comments link should probably be auto-generated.

And this should appear inside my site, without having to theme my blog (other than basic layout) or having to specify which menus or other components are displayed. The request would be something like:

http://my-site/blog/you-should-see-my-wiki

To edit or create a blog entry:

http://my-site/blog/new-post
http://my-site/blog/edit?post=20080303-1

The code for the blog page should be something like:

response.layout = site.content_layout #this includes all the menus, components, etc. in a normal content page

if ( request.inActions("display_blog_post") )
    response.layout.add_component(components.content_component = blog.display_post(id, etc))

Of course this is bad pseudocode and the request processing should be done outside.

Optionally, if permissions allow it, I should be able to go directly to the file, and something like a raw content viewer would do:

fw.getComponentFile(blog_post_id)

and

components = fw.searchComponents(categories={'blog', 'wiki', 'forum'}, regex='/superframework/i')

Anyway, the point is that I want components to work via code, not an admin interface. I want all my content to be file-based and versioned, I want components to access my content, and I want to do it via an MVC framework. I want (pluggable/interchangeable) hooks for things like authentication and persistence, and helpers for things like session handling and database connections. I want themes and layouts to be separate, and again code (file) based, though there’s nothing wrong with having an admin interface for selecting themes, layouts, and components, managing users, and publishing articles and other such workflow, and it’d be nice to have a nice UI for that admin interface.

SeaClear raster navigational charts

One of the coolest applications I’ve been meaning to play with for the past couple years is in a category near and dear to my heart, meaning it has to do with sailing.

SeaClear is a chart plotting tool (like the very expensive C-Maps), but it uses only raster (bitmapped) charts, as opposed to vector charts.

If you’re staying in US coastal waters, you can get just about every inch of charted water for free:

http://nauticalcharts.noaa.gov/mcd/Raster/download.htm

You can plug a GPS into SeaClear and it will use the charts (images with datum information that allows it to plot latitude and longitude). It’s not recommended that you steer your ship by it, but it’s a very nice tool.

You could potentially scan your own paper charts and with calculated datum, use them as well. I’m surprised there’s not more of a market around this (at least I can’t find it) with things like admiralty charts.

I think it can take Maptech charts as well. It can read BSBs, because that’s what the NOAA charts are, but I’ve heard there may be DRM on newer Maptech charts.

I downloaded SeaClear yesterday and got practically all the NOAA charts for the Pacific Northwest, since that’s where I hope to be sailing this summer (but hopefully not next year). I have my eye on a boat too (please don’t steal it).

SeaClear looks nice, but could use some updates. It crashed on me once (on Vista) when I opened a new chart.

If SeaClear were open source, I’d start hacking on it right away. I wonder if the owner still maintains it, and if he’d like any help.

Other interesting chart tools:

Fugawi is relatively cheap and takes ENC (vector) charts as well as raster charts — so it sounds like a middle-of-the-road compromise. It also has Google Earth support, so you can place your tracks in Google Maps. Very cool.

Also see:

http://free-navigation.blogspot.com/

http://destinsharks.com/category/google-earth-maps/

EarthNC.com

http://www.panbo.com/yae/archives/001475.html