301221 – Conceptual engineering 4


20220706 – Conceptual engineering links

(1) Conceptual Engineering Online Seminar


(2) Randomly chosen lecture from https://www.youtube.com/c/ConceptualEngineering

(3) Summer School – Conceptual Engineering, Experimental Methods and Politically Correct Language

(3.1) Lecturers list: https://sites.google.com/view/summerschool-ce-and-xphi/home


HANS JOHANN GLOCK (UZH – University of Zurich)


ETHAN LANDES (UZH – University of Zurich)


MANUEL GUSTAVO ISAAC (UZH – University of Zurich)



NICOLE RATHGEB (UZH – University of Zurich)


KEVIN REUTER (UZH – University of Zurich)


PASCALE WILLEMSEN (UZH – University of Zurich)



“What I cannot create, I do not understand” – Richard P. Feynman *1


*1 “What this means is we must take an approach that is realistic, and recognises our limitations *2. If we do that, we might find an approach which isn’t mathematically perfect but does actually give us better results, because it doesn’t make false idealistic assumptions …

Let’s illustrate what we mean by this. Imagine a very complex landscape with peaks and
troughs, and hills with treacherous bumps and gaps. It’s dark and you can’t see anything. You
know you’re on the side of a hill and you need to get to the bottom. You don’t have an accurate
map of the entire landscape. You do have a torch. What do you do? You’ll probably use the
torch to look at the area close to your feet. You can’t use it to see much further anyway, and
certainly not the entire landscape. You can see which bit of earth seems to be going downhill
and take small steps in that direction. In this way, you slowly work your way down the hill, step
by step, without having a full map and without having worked out a journey beforehand.”
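The torch-and-hill analogy above is plain gradient descent: use only the local slope and take small steps downhill. A minimal sketch, assuming an illustrative one-dimensional “landscape” y = (x − 3)² + 1 (the function and step size are not from the quoted text):

```python
# Gradient descent: take small steps downhill using only the local slope,
# just like checking the ground near your feet with a torch.

def landscape(x):
    # An illustrative "hill": the bottom of the valley is at x = 3.
    return (x - 3) ** 2 + 1

def slope(x):
    # Derivative of the landscape: dy/dx = 2 * (x - 3).
    return 2 * (x - 3)

x = 10.0          # start somewhere on the hillside
step_size = 0.1   # how far each small step goes (the learning rate)

for _ in range(100):
    x -= step_size * slope(x)   # step in the downhill direction

print(round(x, 3))  # -> 3.0, the bottom of the valley
```

No full map of the landscape is ever computed; each step uses only what the “torch” can see, the slope at the current position.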


*2 search tags: #Frege, #Hilbert, #Turing, #Church, #von Neumann


[1] Forward propagation, https://medium.com/analytics-vidhya/feed-forward-neural-networks-intuition-on-forward-propagation-f77468fad625

[2] Back propagation, https://www.analyticsvidhya.com/blog/2021/12/whats-happening-in-backpropagation/

[3] Make Your Own Neural Network, https://github.com/harshitkgupta/StudyMaterial/blob/master/Make%20Your%20Own%20Neural%20Network%20(Tariq%20Rashid)%20-%20%7BCHB%20Books%7D.pdf

[4] Make Your Own Neural Network, https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork

[5] Make your first GAN with PyTorch, https://www.educative.io/courses/make-your-first-gan-pytorch


(1) “What is the difference between traditional machine learning algorithms and neural networks? The answer is something known as Inductive Bias.” The main advantage of neural networks is that their inductive bias is very weak; hence, no matter how complex the relationship or function is, the network is somehow able to approximate it.

(2) “In theory, neural networks should be able to approximate any continuous function, however complex and non-linear it is”.

(3) “Non-linearity is the key. Before we go further, we need to understand the power of non-linearity. When we add two or more linear objects like a line, plane or hyperplane, the result is also a linear object: a line, plane or hyperplane respectively. No matter in what proportion we add these linear objects, we are still going to get a linear object.

But this is not the case for addition of non-linear objects. When we add two different curves, we are probably going to get a more complex curve. This is shown in the gist below. If we could add different parts of these non-linear curves in different proportions, we should somehow be able to influence the shape of the resultant curve… In addition to just adding non-linear objects, or let’s say “hyper-curves” rather than “hyperplanes”, we are also introducing non-linearity at every layer through these activation functions. Which basically means we are applying a non-linear function over an already non-linear object. And by tuning these biases and weights, we are able to change the shape of the resultant curve or function.”
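The claim in (3) can be checked numerically. In this small sketch (the particular functions and weights are illustrative, not from the cited article), a line has zero second differences everywhere, so any weighted sum of lines stays flat in that sense, while a weighted sum of tanh “curves” does not:

```python
import math

# Two linear functions: any weighted sum of them is again linear.
f = lambda x: 2 * x + 1
g = lambda x: -0.5 * x + 3
h = lambda x: 0.75 * f(x) + 0.25 * g(x)   # still just a line

# A line has constant slope, so its second differences are zero everywhere.
second_diff = lambda fn, x: fn(x + 1) - 2 * fn(x) + fn(x - 1)
print(second_diff(h, 0.0), second_diff(h, 5.0))   # -> 0.0 0.0

# Two non-linear "curves" (tanh units, as in a hidden layer):
u = lambda x: math.tanh(2 * x - 1)
v = lambda x: math.tanh(-x + 2)
w = lambda x: 1.5 * u(x) + 0.8 * v(x)   # weighted sum of curves

# Non-zero second differences: the result is a genuinely new curve,
# whose shape changes as we tune the weights 1.5 and 0.8.
print(second_diff(w, 0.0) != 0.0)   # -> True
```

Retuning the two weights reshapes w, which is exactly the lever the quoted passage describes: the network bends the resultant curve by adjusting how much of each non-linear unit it mixes in.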

(4) Summary: “One of the important entities in forward propagation is weights. We saw how tuning the weights can take advantage of the non-linearity introduced in each layer to leverage the resultant output. As we said, we are going to randomly initialize the weights and biases and let the network learn these weights over time. Now comes the most important question: how are these weights going to be updated? How are the right weights and biases, the ones that optimize the performance of the network in approximating the original relationship between x and y, going to be learned?”
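The forward pass summarised in (4), random weights plus a non-linearity at every layer, can be sketched in a few lines. The layer sizes, the sigmoid activation and all numbers below are illustrative choices, not taken from the cited sources:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # Non-linear activation applied at every layer.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # One layer: weighted sum of inputs plus bias, squashed by the
    # activation -- this is where each layer adds its non-linearity.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def random_matrix(rows, cols):
    # Random initialisation: the network will later *learn* these values.
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

n_in, n_hidden, n_out = 3, 4, 2
w1, b1 = random_matrix(n_hidden, n_in), [0.0] * n_hidden
w2, b2 = random_matrix(n_out, n_hidden), [0.0] * n_out

x = [0.9, 0.1, 0.8]             # example input vector
hidden = layer(x, w1, b1)       # forward propagation, layer 1
output = layer(hidden, w2, b2)  # forward propagation, layer 2
print(len(output))              # -> 2 activations, each in (0, 1)
```

With the weights still random, the output is of course meaningless; answering the question at the end of (4), how to update them, is what back-propagation in reference [2] is for.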

(5) “Using basic differential calculus, we come to the conclusion that whenever the value of x needs to be reduced, dy/dx is a positive number; whenever it needs to be increased, dy/dx is a negative number.”
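The sign rule in (5) is just the gradient-descent update x ← x − η·dy/dx: a positive slope means x must move down, a negative slope means it must move up. A minimal check on y = x² (an illustrative function, chosen only because its derivative is easy to read off):

```python
def y(x):
    return x ** 2

def dydx(x):
    return 2 * x   # derivative of x**2

eta = 0.1  # learning rate

# Positive slope at x = 4 -> the update reduces x (moves left, downhill).
x = 4.0
print(dydx(x) > 0, x - eta * dydx(x) < x)   # -> True True

# Negative slope at x = -4 -> the update increases x (moves right, downhill).
x = -4.0
print(dydx(x) < 0, x - eta * dydx(x) > x)   # -> True True
```

Subtracting η·dy/dx therefore always moves x against the slope, which is precisely how the weight updates asked about in (4) are carried out during back-propagation.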


If consistency appeals to you, help shape this page.
