
#178 — Must settle scoping details for block-scoped bindings


As I recall, there were three possibilities recently raised verbally that are candidates for consensus. The first, which I believe corresponds to the current proposal on the wiki, has the following characteristics:

"letrec" semantics with dynamic dead zone

1) Function declarations bring their variable into scope throughout their containing block.
2) The function variable's initialization is hoisted to the beginning of that block.
3) Const and let declarations all bring their variable into scope throughout their containing block.
4) Const and let variables are initialized at the point of their declaration.
5) Once initialized, const and let variables are live.
6) An attempt to read a const or let variable before it's live should throw a ReferenceError (or TypeError?).
7) An attempt to assign to a let variable before it's live should throw a ReferenceError (or TypeError?).
8) Any attempt to assign to a const variable should be an early error (SyntaxError?).
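
For illustration, a minimal sketch of how points 5-7 would play out under this proposal (the error type is left open above, so it is just called an error here; each commented line is meant in isolation, not as sequential execution):

{
  // x is in scope for the whole block (point 3) but not yet live
  console.log(x);   // would throw (point 6): read before x is initialized
  x = 1;            // would throw (point 7): assignment before x is initialized
  let x = 0;        // x is initialized and becomes live here (points 4 and 5)
  x = 2;            // fine once execution reaches this point
}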

"letrec" semantics with static and dynamic dead zone

9) Like '"letrec" semantics with dynamic dead zone' except that, if a const or let variable is used in its block textually "before" its declaration, that's an early error. "before" means, in earlier statements and declarations, and in the right hand side of the variable's own declaration.

"let*" semantics with dynamic dead zone

3) Const and let declarations bring their variable into scope starting from their point of declaration through the end of their containing block, but not including the right-hand side of that declaration.
1,2,4-8) All the rest are like letrec.
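
A sketch of how the let* alternative would differ when an outer binding of the same name exists (note this is not how either letrec alternative above behaves):

let x = "outer";
{
  console.log(x);    // "outer": the inner x is not yet in scope under let*
  let x = x + "!";   // the right-hand side is excluded, so it still sees the outer x
  console.log(x);    // "outer!"
}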


I favor '"letrec" semantics with dynamic dead zone'.


The main arguments I see for the dynamic dead zone semantics (which essentially views a block as one big recursive binding) are:

1.) let* (or "C") semantics for `let' and `const' is inconsistent with the existing lifting semantics for `function', whose scope stretches over the whole block.

2.) letrec with static dead zone does not detect the general case (violations can still occur dynamically, e.g. through a closure called too early), while also ruling out some perfectly useful programs (e.g., recursive or mutually recursive object definitions via let or const).

3.) Binding semantics should be consistent between local blocks and module/global scope. Since modules are recursive, this pretty much mandates letrec semantics with no static restriction.
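
To make argument 1 concrete, a sketch of the inconsistency if let/const got let* semantics while function kept its existing hoisting:

{
  f();              // fine: the function binding covers the whole block and is initialized up front
  console.log(x);   // under let*/"C" semantics x is simply not in scope yet here, unlike f;
                    // under letrec it is in scope, though still in its dead zone
  function f() {}
  let x = 1;
}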


0) Formal parameters are treated as if they were let declarations that bring their binding into scope prior to step 1. Formal parameters (including parameters bound within destructuring patterns) are bound in left-to-right order. If a formal parameter has an initializer, the parameter is only considered initialized after the evaluation of the initializer. (I.e., it is an early error for an initializer to reference the parameter it is initializing or any parameter that is defined to the right of the initializer.)
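
A sketch of that parameter rule (the early-error treatment is as proposed in this comment):

function f(a = 1, b = a + 1) { return [a, b]; }   // fine: b's initializer only references a parameter to its left
function g(a = b, b = 1) { return [a, b]; }       // early error as proposed: a's initializer references b, declared to its right
function h(a = a) { return a; }                   // early error as proposed: the initializer references the parameter it initializes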


(In reply to comment #1)
> The main arguments I see for the dynamic dead zone semantics (which essentially
> views a block as one big recursive binding) are:
>
> 1.) let* (or "C") semantics for `let' and `const' is inconsistent with the
> existing lifting semantics for `function', whose scope stretches over the whole
> block.
>
> 2.) letrec with static dead zone does not detect the general case, while also
> ruling out some perfectly useful programs (e.g., recursive or mutually
> recursive object definitions via let or const).

But dynamic dead zones don't help with mutually recursive objects either:

const obj1 = {other: obj2}; //dynamic error on reference to obj2
const obj2 = {other: obj1};

The error in the definition of obj1 might as well be static.

It is only uplevel references from within inner functions that benefit from a dynamic dead zone.
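
For contrast, a sketch of the case that does benefit from a dynamic dead zone, where the forward reference is only evaluated later, from inside a function:

const obj1 = {other: function () { return obj2; }};   // no error: obj2 is only read when the function runs
const obj2 = {other: function () { return obj1; }};
obj1.other();   // fine: both bindings are initialized by now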


>
> 3.) Binding semantics should be consistent between local blocks and
> module/global scope. Since modules are recursive, this pretty much mandates
> letrec semantics with no static restriction.

Can you give an example where static would be a problem for the top level WRT modules?


(In reply to comment #3)
> (In reply to comment #1)
> > The main arguments I see for the dynamic dead zone semantics (which essentially
> > views a block as one big recursive binding) are:
> >
> > 1.) let* (or "C") semantics for `let' and `const' is inconsistent with the
> > existing lifting semantics for `function', whose scope stretches over the whole
> > block.
> >
> > 2.) letrec with static dead zone does not detect the general case, while also
> > ruling out some perfectly useful programs (e.g., recursive or mutually
> > recursive object definitions via let or const).
>
> but dynamic deadzones don't help with mutually recursive objects either:
>
> const obj1 = {other: obj2}; //dynamic error on reference to obj2
> const obj2 = {other: obj1};
>
> the error in the definition of obj1 might as well be static
>
> It is only uplevel references from within inner functions that benefit from a
> dynamic dead zone

Yes. The example I had in mind is one where the recursion goes through methods:

const x = {f() { ...y...}};
const y = {g() { ...x...}};

Now, I think what you had in mind is to make a distinction between references in the same scope, and references in a nested (function) scope. But that seems like even more complication for minor benefit.

> > 3.) Binding semantics should be consistent between local blocks and
> > module/global scope. Since modules are recursive, this pretty much mandates
> > letrec semantics with no static restriction.
>
> Can you give an example where static would be a problem for the top level WRT
> modules?

When you have recursive modules, all module bodies together basically form one big letrec. So treating their bindings as sequential isn't very meaningful. A per-module static dead zone wouldn't be unsound, but it would be an even less reliable analysis than in function scope.
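
A sketch of that situation, written with import/export syntax purely for illustration (the module syntax itself was still in flux at the time):

// a.js
import {b} from "./b.js";
export const a = {peer: function () { return b; }};

// b.js
import {a} from "./a.js";
export const b = {peer: function () { return a; }};

Neither module's declarations come textually "before" the other's, so a per-module static "use before declaration" check says little here; a dynamic dead zone would still catch a premature read if, say, a.js touched b eagerly during its own initialization.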


These issues were resolved at TC39 meetings and the resolutions are reflected in the revision 4 spec draft.