After George Boole’s introduction of an algebraic approach to logic, the subject morphed towards a more set-theoretic formulation, with so-called Boolean algebra initiated by John Venn and Charles Peirce. Venn diagrams (originally going back to Euler) give us a visual way of representing relations between subsets of a universal set. The operations of meet and join, or intersection and union, together with taking complements, become replacements for the product and sum of the Algebra of Boole.

In this set-theoretic context, De Morgan’s laws clarify how to compute complements of unions and intersections, while the two distributive laws involve three sets and either a union of an intersection or an intersection of a union. We illustrate how to verify these laws, first from a truth-table perspective, and then with computations using the Algebra of Boole.
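The truth-table verification described above can be sketched in a few lines of Python (the helper name is my own, not from the video): each law is checked against every assignment of three truth values.

```python
from itertools import product

def laws_hold():
    """Brute-force truth-table check: De Morgan's laws and the two
    distributive laws, over every assignment of three variables."""
    for a, b, c in product([False, True], repeat=3):
        if (not (a or b)) != ((not a) and (not b)):
            return False          # De Morgan for union
        if (not (a and b)) != ((not a) or (not b)):
            return False          # De Morgan for intersection
        if (a and (b or c)) != ((a and b) or (a and c)):
            return False          # intersection distributes over union
        if (a or (b and c)) != ((a or b) and (a or c)):
            return False          # union distributes over intersection
    return True
```

Calling `laws_hold()` returns `True`, confirming all four laws on all eight rows of the truth table.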

There is a heretical message here: professors teaching circuit analysis to engineers might want to start thinking about revamping their subject, and replacing Boolean algebra with the original, more powerful and simpler Algebra of Boole!

source

Comments are closed.

Ah yes, like why the SET, Osiris & ISIS names functionally mean what they do. Esoteric (meaning: to decipher reality through an AI prism, as all constructs are by probability such) humor, no charge.

Boole (5:38) lived from 1815 to 1864.

That 'circled-sum' symbol is never used in circuit analysis to represent inclusive-OR – it represents exclusive-OR always.

We circuit analysts routinely use the ring of AND and XOR when it's convenient. More often, the lattice is what's convenient. We learn the theory of the ring – and simply don't find it useful for most purposes. An exception is computer arithmetic, in which the most fundamental device is the 'half adder' – a device that adds two 1-bit numbers to produce a 2-bit result. The 'half adder' comprises, of course, an AND and an XOR. We learn to build a theory of integer arithmetic out of these – and then push that theory into a corner and learn about things like 'carry lookahead' so that adding or multiplying large numbers will not introduce large delays.
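As a quick sketch of the half adder described here (Python bit operations stand in for the gates; the function name is my own):

```python
def half_adder(a, b):
    """Add two 1-bit numbers: the sum bit is a XOR b and the carry
    bit is a AND b, giving a 2-bit result (carry, sum)."""
    return a & b, a ^ b
```

For example, `half_adder(1, 1)` gives `(1, 0)`, i.e. binary 10 = decimal 2 — one AND and one XOR, exactly as the comment says.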

One thing that concerns us is the cost of implementation, either in the number of devices required or in the time required to compute a function. An AND or OR (we more often use NAND or NOR, because it's convenient to have a single operation that's algebraically complete) typically requires four transistors and introduces one 'gate delay' into the circuit. A multiple-input NAND or NOR scales linearly with the number of inputs and introduces no more delay relative to a single-input gate. (In other words, the lattice property of modularity reflects the physics of what the device actually computes.)
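The algebraic completeness of NAND mentioned above follows directly from De Morgan's laws; a small Python check (function names are mine) builds NOT and OR out of nothing but NAND:

```python
def nand(a, b):
    """NAND on bits 0/1."""
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                 # NOT from NAND

def or_(a, b):
    """De Morgan: a OR b = NOT(NOT a AND NOT b),
    which is a NAND of inverted inputs."""
    return nand(not_(a), not_(b))

# verify against Python's own OR on all four input pairs
assert all(or_(a, b) == (a | b) for a in (0, 1) for b in (0, 1))
```

AND is then just `not_(nand(a, b))`, so every Boolean function is reachable from NAND alone.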

An XOR is fundamentally a more complex operation. A two-input XOR has roughly three times the device count and twice the delay of a two-input AND, OR, NAND or NOR. When adding inputs to an XOR gate, you can either retain a two-unit delay, which requires scaling the device size exponentially with the number of inputs, or resort to a delay of O(log N) (N being the number of inputs) and scale the size of the device as O(N).
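The O(log N) option is a balanced tree of two-input XORs; this sketch (my own helper, assuming delay is simply the number of tree levels) computes the parity and counts the depth:

```python
def parity_tree(bits):
    """Reduce a list of bits with 2-input XORs arranged as a balanced
    tree; return (parity, depth). Depth grows as O(log N) while the
    gate count (N - 1 two-input XORs) grows as O(N)."""
    depth = 0
    while len(bits) > 1:
        # one level of 2-input XOR gates, all evaluated in parallel
        level = [bits[i] ^ bits[i + 1] for i in range(0, len(bits) - 1, 2)]
        if len(bits) % 2:           # odd element passes through this level
            level.append(bits[-1])
        bits = level
        depth += 1
    return bits[0], depth
```

For eight inputs, `parity_tree([1, 0, 1, 1, 0, 1, 0, 1])` gives `(1, 3)`: five ones makes odd parity, reached in log2(8) = 3 levels.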

You're right that XOR is the more powerful of the operations in terms of the complexity of a function that a single gate can represent – but fundamentally, what we care about is building a desired function using the minimum space or time. We reason algebraically, using the laws of the lattice, rather than those of the ring. We talk about Boolean functions represented in 'sum of products' or 'product of sums' form (equivalently 'disjunctive normal' and 'conjunctive normal' form), and use commutativity, associativity, distributivity, the identity elements, modularity, and De Morgan's laws. When we get into fault analysis, where there are various types of 'uncertain', 'undefined' or 'don't care' values, we even resort to intuitionistic Heyting algebras – which ought to be beloved of a superfinitist such as yourself!
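The 'sum of products' form mentioned above can be read straight off a truth table; a rough Python sketch (the helper name and `x0`/`~x0` notation are my own) emits one product term per input tuple where the function is 1:

```python
from itertools import product

def to_sop(f, n):
    """Read off the disjunctive-normal ('sum of products') form of an
    n-input Boolean function from its truth table: one product term
    (minterm) per input tuple where f evaluates to 1."""
    terms = []
    for bits in product([0, 1], repeat=n):
        if f(*bits):
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "0"
```

For instance, XOR comes out as `to_sop(lambda a, b: a ^ b, 2)` = `"(~x0 & x1) | (x0 & ~x1)"` — the familiar two-minterm expansion.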

In either ring or lattice notation, the problem of 'satisfiability' – 'does this Boolean function have a tuple of arguments that makes its output true?' – is NP-complete, so in an age where large systems are all solved by computer, it's far from clear that either notation is 'easier to compute'.
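For small functions, satisfiability can of course be checked by brute force; the 2**n blow-up in this sketch (my own helper) is exactly the exponential worst case that, as far as anyone knows, no algorithm avoids in general:

```python
from itertools import product

def satisfiable(f, n):
    """Naive SAT check: try all 2**n assignments of an n-input
    Boolean function. Fine for small n; hopeless in general."""
    return any(f(*bits) for bits in product([0, 1], repeat=n))

# (a OR b) AND (NOT a OR NOT b) is satisfiable (take a != b) ...
sat = satisfiable(lambda a, b: (a | b) & ((1 - a) | (1 - b)), 2)
# ... while a AND NOT a is not
unsat = satisfiable(lambda a: a & (1 - a), 1)
```

Here `sat` is `True` and `unsat` is `False`, regardless of whether the functions are written in ring or lattice notation.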

We're not ignorant of the algebra of Boole. We just don't find it a convenient framework for what we do. We use another theory of precisely equivalent power, with an equally finitistic foundation, that gives us a notation that more closely matches the things we actually reason about.

Really excited that you've continued with this stuff on logic/reasoning and its relation to algebra and set theory!

I mentioned once before that E.T. Jaynes connects this all together with probability theory in his book Probability Theory: The Logic of Science, and argues (based on earlier arguments by Polya, and on Cox's theorem) that probability theory is an 'extension' of logic from strictly 0s and 1s to probability values in the interval [0, 1].

I would be super-interested if you ever did something on probability theory from your rational/finite point of view, perhaps even connecting it to your work on algebra, geometry, and your new work on calculus. (Jaynes more-or-less disliked infinite sets as well, and sometimes preferred to work in rational numbers in his book, so I think you might find his stance similar to yours, and would likely find his book a good read and perhaps even compelling.)

P.S.: Just to note that when I chose my current user name, it was in part because of its relation to all of these topics, in addition to probability theory. The log-odds form of Bayes' theorem implicitly uses the hyperbolic tangent function and its inverse to convert from log-odds to probabilities; or at least to a linear transformation of probabilities from [0,1] to [-1,1], the latter being the limiting values of the hyperbolic tangent at -∞ and +∞ respectively.
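The log-odds form of Bayes' theorem mentioned here is easy to state in code, and the tanh connection the commenter alludes to is the identity 2*sigmoid(L) - 1 = tanh(L/2). A sketch, with my own function names:

```python
import math

def log_odds(p):
    """Map a probability in (0, 1) to log-odds in (-inf, inf)."""
    return math.log(p / (1 - p))

def posterior(prior_p, log_likelihood_ratio):
    """Bayes' theorem in log-odds form: posterior log-odds equal
    prior log-odds plus the log likelihood ratio of the evidence."""
    L = log_odds(prior_p) + log_likelihood_ratio
    return 1 / (1 + math.exp(-L))   # sigmoid: back to a probability

# The tanh connection: rescaling probabilities from [0, 1] to [-1, 1]
# turns the sigmoid into the hyperbolic tangent:
#     2 * sigmoid(L) - 1 == tanh(L / 2)
```

For example, a prior of 0.5 updated with evidence whose likelihood ratio is 3 gives `posterior(0.5, math.log(3))` = 0.75, since the odds go from 1:1 to 3:1.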

I've even sketched the hyperbola (and various tangents/probabilities on it) out on paper, just to try to get an intuitive/geometrical feel for what it means in relation to Bayes' theorem and probability theory as 'extended logic'. I'm sure if a geometer such as yourself were to do something on these connections between probability theory and other math topics, it would be much more enlightening than my own amateur stumblings-around over the years.

Personally, I found Jaynes' book went from super-basic to this-is-really-cool to holy-cow-this-is-way-over-my-head over the course of the book, which has actually sparked my interest to re-learn a lot of the maths I glossed over (cramming for exams, for example) in attaining my computer science undergrad degree. Much of my interest in your channel comes from that same spark: originally your rational trig, universal hyperbolic geometry, and projective geometry stuff.

Applications of Bayes' theorem (in other words, rigorous probability theory rather than the often ad hoc devices of traditional/classical statistics) have yielded major advances in machine learning and artificial intelligence recently, as well as in other areas such as science and even philosophy (epistemology), which is why I'm so interested in it. I consider it as important a topic for humanity to get a handle on as any other foundational mathematical topic. I'm sure it could also be connected to and applied to many of the topics you cover on this channel, and could potentially play a part in the system of maths you've been developing, if you don't mind me speculating as such. 😅 Cheers! 😊

BTW, it's completely counter-intuitive, but Charles Sanders Peirce's surname is actually pronounced identically to 'purse'. In other words, if you only heard his name and had to guess its spelling, you'd probably spell it Perce (like Percy, I suppose) instead of Peirce. It's such a common misunderstanding that the first thing on his Wikipedia page is a clarification about how to pronounce his surname. 😄🤷‍♂️

[Also note that the spelling is 'ei' instead of the typical 'ie' in, say, Pierce Brosnan, although most other people with the 'ei' spelling actually do pronounce their names like Brosnan's, so unfortunately even this unusual spelling is not a very good mnemonic device.]

Complains about the unfirm foundations of analysis

Explains set theory with pictures.

Joking, respect! From Italy 🙂

We learned Karnaugh maps along with Boolean algebra in my electrical engineering degree. They always seemed a mysterious tool. Will you be touching on / relating to them?

One advantage of using conjunction and disjunction rather than multiplication and addition is that they work better together with quantifiers: conjunction translates into the universal quantifier and disjunction into the existential quantifier. Multiplication is the same as conjunction, so multiplication also translates into the universal quantifier, but addition translates into a sum mod 2. One may consider the sum mod 2 as a type of quantifier as long as there is a fixed total number of elements of interest, but it would be highly inconvenient: it is quite awkward to represent the universal and existential quantifiers in terms of such a sum quantifier, and the representation would depend on the number of elements in the set. If the number of elements is unknown, unbounded or infinite, the sum quantifier breaks down.
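In Python terms (just an illustration of the point above, over a made-up finite set), all() and any() are the iterated conjunction and disjunction, while iterating XOR only yields a parity:

```python
xs = [2, 4, 6, 7]

# 'for all x: even(x)' is an iterated AND; 'exists x: even(x)' an iterated OR
forall_even = all(x % 2 == 0 for x in xs)        # False: 7 is odd
exists_even = any(x % 2 == 0 for x in xs)        # True

# Iterating addition mod 2 (XOR) instead only reports whether the number
# of elements satisfying the predicate is odd or even:
parity_even = sum(x % 2 == 0 for x in xs) % 2    # three even values -> 1
```

Recovering "for all" or "there exists" from that parity value alone is impossible without knowing how many elements the set has, which is the commenter's point.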

Zzz..Zzz..Zzz…

p v q = 1 + (p+1)(q+1)
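This identity holds once the right-hand side is reduced mod 2, as in the Algebra of Boole; a quick brute-force check over all four bit pairs:

```python
# p OR q agrees with 1 + (p+1)(q+1) when the arithmetic is taken mod 2:
# 1 + (p+1)(q+1) = 1 + pq + p + q + 1 = pq + p + q  (since 1 + 1 = 0)
ok = all((p | q) == (1 + (p + 1) * (q + 1)) % 2
         for p in (0, 1) for q in (0, 1))
```

Here `ok` comes out `True`: the only pair where the product (p+1)(q+1) survives mod 2 is p = q = 0, exactly the one row where p ∨ q is 0.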

Will this new thinking make topology easier?

Professor, when will you post screenshot PDFs of your other lectures on your site?

What are your thoughts on Category Theory? Is it real math or no?

here's a funny reddit post you might enjoy

https://www.reddit.com/r/math/comments/adz459/what_mathematical_objects_do_you_hate/