In Xcode 6 beta 7, my warnings are divided into two sections; for example, if one is called "my app" the other is called "my app project". I have fixed all the warnings in the "my app" section (there were only 3-4). In the "my app project" section, however, I have over 42,000 warnings! It has been this way since before I even wrote any code, right from when I created the empty project. I always assumed this was a beta issue, but I would like to know if everyone else is experiencing the same thing. I can't even look at what the problems are, because Xcode freezes under the overwhelming number of warnings when I try.
The release notes provide some insight into these warnings; they seem to be a consequence of Swift's ongoing evolution:
http://adcdownload.apple.com//Developer_Tools/xcode_6_beta_7_apzr94/xcode_6__beta_7_release_notes.pdf
From the release notes:
Swift Language
• A large number of Foundation, UIKit, CoreData, SceneKit, SpriteKit, Metal APIs have been audited for optional conformance, removing a significant number of implicitly unwrapped optionals from their interfaces. This clarifies the nullability of their properties, arguments and return values of their methods. This is an ongoing effort that started shipping in beta 5.
These changes replace T! with either T? or T depending on whether the value can be null or not null, respectively. If you find a case that was changed incorrectly, please file a radar and include the tag ‘#IUO’ in the subject line. Please do not file feature requests about APIs that are still marked as T!, we know about them.
If you encounter a method for which the return value is incorrectly considered non-nullable, or a property that is incorrectly considered non-nullable, you can work around the problem by immediately wrapping the result in an optional:
var fooOpt: NSFoo? = object.reallyMightReturnNil()
if let foo = fooOpt { ... }
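In practice the adjustment is usually just optional binding or optional chaining on the now-optional return value. A minimal sketch, reusing the hypothetical object.reallyMightReturnNil() from the release notes (not a real Foundation API); use(_:) and handleMissingFoo() are placeholders:

// Before this change the API returned NSFoo! and the result could be used
// directly; now it returns NSFoo?, so unwrap it explicitly.
if let foo = object.reallyMightReturnNil() {
    use(foo)           // foo is a non-optional NSFoo in this branch
} else {
    handleMissingFoo() // the nil case now has to be handled deliberately
}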
I haven't had any issues like that in any of the releases including Xcode 6 Beta 7. Each beta release I usually end up with about 100 or so errors due to changes in syntax but they only take half an hour or so to go through and resolve.
Xcode keeps throwing a compiler error that my code doesn't comply with Swift 6, which obviously isn't out yet. It only started when I downloaded Xcode 13.3.
The error reads:
Expression requiring global actor 'MainActor' cannot appear in default-value expression of property '_api'; this is an error in Swift 6
My code declares a state object like so:
@StateObject var api = RecipeAPI()
I have debugged, and closed and reopened everything, but nothing changes. I reported it to Apple too.
Best,
Imran
That's a warning in Swift 5. In Swift 6 it will be an error. It's suggesting that you fix it now, because in the future it will break the build. You're requiring that everything (including the init) be run on the MainActor, but this assignment isn't promised to run on the MainActor itself (possibly because the type it's part of isn't also marked @MainActor). You may find a lot of these in 5.6 that are extremely difficult to fix; they're still tuning the warnings. (So when you start to formulate your next question, "how do I eliminate all Swift concurrency warnings when dealing with SwiftUI and Foundation," the answer may be "you can't, or at least not in any easy way." It depends on the exact problem.)
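One workaround that often quiets this particular diagnostic is to move the call out of the default-value expression and into an initializer that is itself main-actor-isolated. A minimal sketch, assuming RecipeAPI is a main-actor-isolated ObservableObject (only the name comes from the question; the rest is illustrative):

import SwiftUI

// Assumed shape of RecipeAPI: a main-actor-isolated ObservableObject.
// Its init being @MainActor is what triggers the diagnostic when it is
// called in a property's default-value expression.
@MainActor
final class RecipeAPI: ObservableObject {
    @Published var recipes: [String] = []
}

struct RecipesView: View {
    @StateObject private var api: RecipeAPI

    // Build the object in a @MainActor-isolated init instead of in the
    // default-value expression, which is not actor-isolated.
    @MainActor
    init() {
        _api = StateObject(wrappedValue: RecipeAPI())
    }

    var body: some View {
        Text("Recipes loaded: \(api.recipes.count)")
    }
}

Marking the whole view @MainActor can have a similar effect; whether either approach fully removes the warning depends on the exact toolchain and Swift version.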
When building my project in Xcode 8 GM, Xcode gets stuck on "Compiling Swift source files". It also never finishes indexing my project.
I've looked at the similar questions but none of those answers work for me.
Does anyone know what this could be?
Swift type inference was the problem.
There were several instances where I was letting the compiler infer a dictionary's type, and for some reason compile time grew exponentially as the dictionary gained more values.
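The usual fix is to annotate those dictionaries with an explicit type so the compiler doesn't have to infer one from the literal. A minimal sketch with a made-up literal (the real effect only shows up on much larger ones):

// Inferred: the type checker has to find a common type for every value,
// which can become very slow as the literal grows.
let settings = ["timeout": 30, "retries": 3, "threshold": 0.75]

// Annotated: the element type is stated up front, so there is far less
// inference work for the compiler to do.
let annotatedSettings: [String: Double] = ["timeout": 30, "retries": 3, "threshold": 0.75]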
If you used a lot of string concatenation, like
var fun = 0
var tempvalue = 2
// Slow to type-check: the compiler considers every overload of "+"
var result = "some data" + String(fun) + " more data" + String(tempvalue)
transform it to
// String interpolation avoids the "+" overload resolution entirely
var result = "some data \(fun) more data \(tempvalue)"
because the Swift compiler takes a long time analysing the overloads of the "+" operator.
Clean Build Folder (Cmd+Shift+Opt+K) usually helps me in this case, but from time to time the issue returns.
I think several people have experienced this problem, and it's to do with enums.
The problem is quite simple: Xcode handles indentation after enums quite strangely. At first I thought it was my syntax, but it turns out that it's probably not (I'm not ruling that out just yet).
So here's what my code look like:
enum Signs : bool {
    Positive = true,
    Negative = false
};
<This is where Xcode suggests my next line of code should go, which is one tab further than I expect.>
Anyone seen this problem and solved it?
This bug can be replicated 100% of the time by specifying a value for one of the enum members.
If you omit the = true, automatic indentation will perform correctly. This goes for any enum with any number of enum items. As long as one item has = someValue, the closing brace will be misaligned.
It's 2018 and the developers of Xcode still haven't fixed this bug in Xcode 9.2. So, please file bug reports (Help menu -> Report an Issue), now that a method for reproducing it every time has been found.
I usually write my enums like this:
//Using your example
typedef enum {
    kPositive,
    kNegative
} signs;
There you won't have the indentation problem. But I can't answer why Xcode behaves like that.
I have a VB6 app that uses Word interop to create a few reports. In the introduction of these reports there are some instructions in 4 textboxes around an image.
Recently and suddenly the top two textboxes started appearing on the next page, and I can't figure out why. When I step through the code and watch the Word document getting built, everything positions itself correctly; however, if I compile the application, the error reappears.
Any suggestions?
Use late-bound calls to Word. This does not mean removing the reference to the Microsoft Word Xxx Object Library; just alter your Dims like this:
Dim oWord As Object   ' was Word.Application
Dim oDoc As Object    ' was Word.Document
...
oDoc.Protect wdAllowOnlyReading   ' keep using enums
Could it be some 'rounding' difference? For instance, if you compare two floating-point values for equality, the result can subtly depend on the specific compiler/interpreter implementation.
I would suggest trimming your code down to the minimum that still shows the different behaviors. That might clear things up already. If not, please post it here so we can help you.
Maybe you are running the compiled version as a different user than the one running VB when you debug? That could cause what you are describing if the two users have different Word settings.
Is it possible that the compiled version finds a different version of the .dot file?
It may be very helpful if you show the code you use to create the Word document, because then someone here might notice something that can be sensible to moving to a compiled version.
Do you have any code in events that rely on timing, such as Form_Activate, Load, or Unload? I've seen those things behave very differently when stepping through code and when compiled, especially on newer, faster machines.
Let me give a bit of background information before asking my question:
I recently joined a new software development group that uses Rational tools for configuration management, including a source control and change management system.
In addition to these tools, the team has a standard practice of noting any code changes as a comment in the code, such as:
///<history>
[mt] 3/15/2009 Made abc changes to fix xyz
///</history>
Their official purpose for the commenting standard is that "the comments provide traceability from requirement to code modification".
I am preparing to pose an argument that this practice is unnecessary and redundant; that the team should get rid of this standard immediately.
To wit - the change management system is the place to build traceability from requirement to code modification, and source control can provide detailed history of changes by performing a Diff between versions. When source code is checked in, the corresponding change management ticket is noted. When a CM ticket is resolved, we note which source code files were modified. I believe this provides a sufficient cross-reference for the desired traceability.
I would like to know if anyone disagrees with my argument. Am I missing some benefit of commented source code history that change management and source control systems cannot provide?
For myself, I have always found such comments to be more trouble than they're worth: they can cause merge conflicts, can appear as 'false positives' when you're trying to isolate the diffs between two versions, and may reference code changes that have since been obsoleted by later changes.
It's often (not always, but often) possible to change version-control systems without losing metadata. If you were to move your code to a system that doesn't support this, it would not be hard to write a script to convert the change history into comments before the cutover.
A comment allows you to find all the changes and their reasons right in the code where they are relevant, without having to dig into diffs and version-control intricacies. Furthermore, should you decide to change version control systems, the comments will stay.
I worked on a large project with a similar practice that changed source control systems twice. There wasn't a day when I wasn't glad to have these comments.
Is it redundant? Yes.
Is it unnecessary? No.
I've always thought that code should be, of course, under version control, and that the current source code (the one that you can open and read today) should be valid only in present tense.
It doesn't matter if a report could have up to 3 axes in the past and last month you updated it to support up to 6. It doesn't matter whether you expanded some function or fixed some bug, as long as the current version can be easily understood. When you fix a bug, just leave the fixed code.
There's an exception, though. If (and only if) the fixed code looks less intuitive to you than the previous, incorrect one; if you feel that someone might come tomorrow and, just by reading the code, be tempted to change it back to what "seems more correct", then it's good to add a comment: "This is done this way to avoid... blah blah blah." Also, if the problem behind it is an infamous war story in the team's culture, or if for some reason the bug report database contains very interesting information about this part of the code, I wouldn't find it incorrect to add "(see Bug Id 10005)" to the explanatory comment.
The one that jumps to mind for me is vendor lock-in. If you ever moved away from Rational, you'd need to make sure that the full change history was maintained during the migration - not just the versions of the artifacts.
When you're in the code you need to know why it's structured like that; hence in-code commenting. Tools that sit outside the code, good though they may be, require far too much of a context shift in your brain to be useful. As well as that, trying to reverse-engineer the code's intent from documentation and a diff is pretty damn hard; I'd much rather read a line of comment any day.
There was a phase in the code I work on, back in the 1994-96 time frame, where there was a tendency to insert change history comments at the top of the file. Those comments are now meaningless and useless, and one of the many standard cleanups I perform when editing files containing such comments is to remove them.
In contrast, there are also some comments with a bug number at the location where the change is made, typically explaining why the ridiculous code is as it is. These can be very helpful. The bug number gives you somewhere else to look for information, and fingers the culprit (or victim - it varies).
On the other hand, items like this one - genuine; cleaned up last week - make me grit my teeth.
if (ctab->tarray && ctab->tarray[i])
#ifndef NT
    prt_theargs(*(ctab->tarray[i]));
#else
    /* Correct the parameter type mismatch in the line above */
    prt_theargs(ctab->tarray[i]);
#endif /* NT */
The NT team got the call correct; why they thought it was a platform-specific fix is beyond me. Of course, if the code had used prototypes instead of just parameterless declarations before now, then the Unix team would have had to fix the code too. The comment was a help - assuring me that the bug was genuine - but exasperating.