I'd like to scale an image on macOS with a pinch gesture, which works well.
But the image always scales (as expected) around its center.
What I need is for the image to scale with its anchor at the mouse pointer. But I have no clue how to read the current mouse pointer position.
What I have so far is:
GeometryReader { geo in
    ScrollView([.horizontal, .vertical], showsIndicators: true) {
        Image(nsImage: image!)
            .resizable()
            .scaledToFill()
            .animation(.default)
            // Here the 'anchor' should be the pointer position
            .scaleEffect(setZoom(magnificationLevel: magnificationState.scale), anchor: UnitPoint.center)
            .frame(width: geo.size.width, height: geo.size.height)
            .onAppear {
                self.imageMagnificationState = CGFloat(1.0)
            }
    }
}
Is there a snippet for this everyday task? How do I get the cursor position? The SwiftUI docs are very lame here.
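For reference, whatever mechanism delivers the pointer location (on macOS, e.g. an NSEvent from a hosting view), the remaining step is pure arithmetic: convert the local pointer coordinates into a normalized anchor for `.scaleEffect`. A minimal plain-Swift sketch, assuming AppKit-style view coordinates (origin at the bottom-left); the tuple stands in for SwiftUI's `UnitPoint`, whose origin is the top-leading corner:

```swift
import Foundation

// Hypothetical helper: map a pointer location in a view's local,
// bottom-left-origin coordinate space to a normalized anchor where
// (0, 0) is the top-leading corner and (1, 1) the bottom-trailing
// corner, matching SwiftUI's UnitPoint convention.
func pointerAnchor(location: CGPoint, in size: CGSize) -> (x: CGFloat, y: CGFloat) {
    guard size.width > 0, size.height > 0 else { return (0.5, 0.5) }
    let x = min(max(location.x / size.width, 0), 1)
    // AppKit's y axis grows upward, UnitPoint's grows downward, so flip it.
    let y = 1 - min(max(location.y / size.height, 0), 1)
    return (x, y)
}
```

Feeding the result into `UnitPoint(x:y:)` as the `anchor:` of `.scaleEffect` would then zoom toward the pointer; the coordinate flip is the part that is easy to get wrong on macOS.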
I have an array of images which I loop through, and display in a scrollView.
This is what I want to achieve:
I would like there to be only one visible image at a time, like TikTok and Instagram.
I would like the image to take up as much of the screen as possible, while still keeping the aspectRatio.
Center image
I don’t mind using .edgesIgnoringSafeArea([.top, .leading, .trailing])
This is the code I have so far. As you can see, there are two images visible at a time. And even when there is only one image in the array, it isn't centered, but clings to the top of the screen.
Any ideas on how to solve this?
ScrollView(.vertical) {
    ForEach(allRetrievedMedia, id: \.self) { item in
        switch item {
        case .image(let img):
            Image(uiImage: img)
                .resizable()
                .aspectRatio(contentMode: .fit)
                .frame(alignment: .center)
        }
    }
}
Embed the image in a container and add .frame(width: screenSize.width, height: screenSize.height, alignment: .center) to that container. Then apply .scaledToFill() to the image itself. This should do the trick; here's a snippet to help...
struct ContentView: View {
    let screenSize: CGRect = UIScreen.main.bounds

    var body: some View {
        ScrollView(.vertical) {
            ForEach(allRetrievedMedia, id: \.self) { item in
                switch item {
                case .image(let img):
                    VStack {
                        Image(uiImage: img)
                            .resizable()
                            .scaledToFill()
                    }
                    .frame(width: screenSize.width, height: screenSize.height, alignment: .center)
                }
            }
        }
    }
}
Haven't tried that myself, but it should work like a charm. Hope it helps!
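As an aside on what `.scaledToFill()` actually computes: it scales the image uniformly by the larger of the two width/height ratios, so the image covers the whole container and any overflow is cropped. A plain-Swift model of that layout rule (not SwiftUI API, just the math):

```swift
import Foundation

// Uniform scale factor that .scaledToFill effectively applies: the larger
// of the two ratios, so both image dimensions end up >= the container's.
func fillScale(image: CGSize, container: CGSize) -> CGFloat {
    max(container.width / image.width, container.height / image.height)
}
```

This is why the container frame matters in the answer above: the image is sized relative to the frame you give the VStack, and the cropped overflow is what keeps the aspect ratio intact.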
I'm trying to code a simple layout design with SwiftUI, without success!
Here's what I'd like to do:
ScrollView {
    VStack {
        // Header with an orange background.
        // This orange color should also apply to the status bar.
    }
    VStack {
        // Content with a white background.
        // This white color should always extend to the bottom.
    }
}
I first tried to apply an orange background to the first VStack and a white background to the second VStack but I couldn't color the status bar in orange even with .ignoresSafeArea().
Then I tried applying an orange background to the ScrollView and a white background to the second VStack, but I couldn't color the bottom of the screen in white, even with a frame height of .infinity.
I also tried a LazyVStack, but nothing changed.
Do you guys have any idea how to make this work? :-)
You can try setting the height of the content to be at least the screen height:
.frame(minHeight: UIScreen.main.bounds.height)
Here's the sample code of the full view's body (you can replace the HStacks with VStacks, as long as you fill the width):
ScrollView {
    HStack {
        Spacer()
        Text("Header")
        Spacer()
    }
    .padding()
    .background(.orange)

    HStack {
        Spacer()
        Text("Content")
        Spacer()
    }
    .padding()
    .frame(minHeight: UIScreen.main.bounds.height, alignment: .top)
    .background(.white)
}
.background(Color.orange.ignoresSafeArea(.all, edges: .top))
I'm developing a macOS app with SwiftUI.
Let's say we have a fixed-size window that shows a scrollable image.
Now I want to create a feature that lets the user scale the image, then scroll around the scaled image inside the window.
The problem is, once I scale up the image, the scrollable area doesn't seem to expand with it, and isn't big enough to reach the corners of the image.
struct ContentView: View {
    var body: some View {
        ScrollView([.horizontal, .vertical], showsIndicators: true) {
            Image("test")
                .scaleEffect(2)
        }
        .frame(width: 500, height: 500)
    }
}
I have tried to set the Image's frame by using GeometryReader, but got the same result.
macOS 11.1, Xcode 12.3
Thanks!
.scaleEffect seems to perform a visual transform on the view without actually affecting its frame, so the ScrollView doesn't know to accommodate the bigger size.
.resizable() and .frame on the image seem to do the trick:
struct ContentView: View {
    @State private var scale: CGFloat = 1.0

    var body: some View {
        VStack {
            ScrollView([.horizontal, .vertical], showsIndicators: true) {
                Image(systemName: "plus.circle")
                    .resizable()
                    .frame(width: 300 * scale, height: 300 * scale)
            }
            .frame(width: 500, height: 500)
            Slider(value: $scale, in: 0...5)
        }
    }
}
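For intuition about why this works where `.scaleEffect` alone does not: the frame handed to the ScrollView's content must itself grow with the scale, because only the laid-out size determines the scrollable area. A small plain-Swift model of that sizing rule; the lower bound is my own assumption, added because the slider's range starts at 0, which would collapse the frame to nothing:

```swift
import Foundation

// The sizing rule the answer relies on: the content's laid-out frame is the
// base size multiplied by the scale. The minimumScale clamp is a hypothetical
// safeguard against a zero scale collapsing the image entirely.
func scaledFrame(base: CGSize, scale: CGFloat, minimumScale: CGFloat = 0.1) -> CGSize {
    let s = max(scale, minimumScale)
    return CGSize(width: base.width * s, height: base.height * s)
}
```

In the view above, `300 * scale` plays exactly this role, so the ScrollView always sees the true content size and its scrollable area tracks the zoom.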
I want to make a scrolling app with a gradient background. As the user scrolls, the background color changes.
For example, the bottom is black and the top is white. I would specify a VStack height of 8000 points, and over this height, as the user scrolls the screen, they will see the color change.
I didn't find any solution. I tried making a LinearGradient for the VStack and a Rectangle at its full height, but it only covers the phone's screen size (it can't fill all 8000 points of height). So if I scroll upwards, the gradient is lost and only a black screen is displayed.
Thank you!
You can achieve this using a rectangle within a ZStack:
struct ContentView: View {
    var body: some View {
        ScrollView {
            ZStack(alignment: .top) {
                // Example width and height. The width should be the width of the device.
                Rectangle()
                    .frame(width: 10000, height: 2000)
                    .foregroundColor(.clear)
                    .background(LinearGradient(gradient: Gradient(colors: [.white, .black]),
                                               startPoint: .top, endPoint: .bottom))
                // Add your actual content here:
                Text("Yeet")
            }
        }
    }
}
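To see why a single tall rectangle gives the "color changes as you scroll" effect: a linear gradient blends each color component linearly between the start and end values across the rectangle's height, so whatever slice of it is on screen shows the blend for that offset. A numerical sketch of that interpolation (plain Swift, not SwiftUI API; the function name is illustrative):

```swift
import Foundation

// Linear blend a LinearGradient produces along its axis: at vertical
// offset y within a rectangle of the given height, a component goes
// linearly from its start value (white = 1.0) to its end value (black = 0.0).
func gradientBrightness(atOffset y: CGFloat, height: CGFloat,
                        start: CGFloat = 1.0, end: CGFloat = 0.0) -> CGFloat {
    let t = min(max(y / height, 0), 1)   // normalized position, 0 at the top
    return start + (end - start) * t
}
```

Halfway down a 2000-point rectangle, for instance, each component is at the midpoint of the blend, which is why the visible background shifts smoothly as the user scrolls.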
I'm just learning Apple's SwiftUI, and even the basics are a bit frustrating. I'm trying to get my image to the top right of my screen, but by default it shows up in the center. Does anyone know how I can accomplish this? In addition, I tried to resize the image based on the screen size, but the old self.frame.width that I used in regular Swift doesn't work in SwiftUI. Sorry, this new way of writing code in SwiftUI is very odd to me.
var body: some View {
    ZStack {
        // Define a screen color
        LinearGradient(gradient: Gradient(colors: [Color(ColorsSaved.gitLabDark), Color(ColorsSaved.gitLabLight)]),
                       startPoint: .leading, endPoint: .trailing)
            // Extend the screen to all edges
            .edgesIgnoringSafeArea(.all)
        VStack(alignment: .trailing) {
            Image("blobVectorDark")
                .resizable()
                .edgesIgnoringSafeArea(.top)
                .frame(width: 60, height: 60)
                // .offset(x: , y: 20)
        }
    }
}
You need to add a frame to the outer container, here the VStack, and then assign the alignment on that frame.
The width and height in the VStack's frame need to come from a GeometryProxy.
GeometryReader { (proxy: GeometryProxy) in // New Code
    VStack(alignment: .trailing) {
        Image("blobVectorDark")
            .resizable()
            .edgesIgnoringSafeArea(.top)
            .frame(width: 60, height: 60)
            // .offset(x: , y: 20)
    }
    .frame(width: proxy.size.width, height: proxy.size.height, alignment: .topTrailing) // New Code: .topTrailing pins to the top right
}
I'm learning too so this might not be the ideal way to do it, but I did manage to get the image to pin to the top right of the screen.
You can get the screen width from GeometryReader. The alignment was driving me nuts for a while, until I realized I could give a frame to a Spacer to take up the other side of an HStack, and then it worked.
var body: some View {
    GeometryReader { geometry in
        VStack {
            HStack {
                Spacer()
                    .frame(width: geometry.size.width / 2)
                Image("blobVectorDark")
                    .resizable()
                    .aspectRatio(contentMode: .fit)
                    .frame(width: geometry.size.width / 2)
            }
            Spacer()
        }
    }
    .edgesIgnoringSafeArea(.all)
}